Sample records for "validate results obtained"

  1. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Changes as a result of DRG validation. 476.84... DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter... result of QIO validation activities. ...

  2. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  3. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  4. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  5. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  6. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
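
    The record above describes double cross-validation only in outline. As a rough illustration of the idea, the sketch below (Python, with simulated data and illustrative variable names not taken from the record) splits a sample in half, derives a regression equation in each half, and correlates each equation's predictions with the observed criterion in the opposite half; a small drop from the full-sample multiple R suggests stable (invariant) weights.

      # Minimal sketch of double cross-validation for multiple regression
      # (assumed procedure; data and names are illustrative, not from the record).
      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated research data: 200 cases, 3 predictors, one criterion.
      X = rng.normal(size=(200, 3))
      y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=1.0, size=200)

      def fit_predict(X_fit, y_fit, X_apply):
          """OLS fit on one subsample, prediction on the other."""
          A = np.column_stack([np.ones(len(X_fit)), X_fit])
          beta, *_ = np.linalg.lstsq(A, y_fit, rcond=None)
          return np.column_stack([np.ones(len(X_apply)), X_apply]) @ beta

      # Randomly split the sample into two halves.
      idx = rng.permutation(len(y))
      a, b = idx[:100], idx[100:]

      # Cross-apply each half's equation to the other half and correlate.
      r_ab = np.corrcoef(fit_predict(X[a], y[a], X[b]), y[b])[0, 1]
      r_ba = np.corrcoef(fit_predict(X[b], y[b], X[a]), y[a])[0, 1]

      print(f"cross-validated r (A->B): {r_ab:.3f}, (B->A): {r_ba:.3f}")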

  7. Validation of Observations Obtained with a Liquid Mirror Telescope by Comparison with Sloan Digital Sky Survey Observations

    NASA Astrophysics Data System (ADS)

    Borra, E. F.

    2015-06-01

    The results of a search for peculiar astronomical objects using very low resolution spectra obtained with the NASA Orbital Debris Observatory (NODO) 3 m diameter liquid mirror telescope (LMT) are compared with results of spectra obtained with the Sloan Digital Sky Survey (SDSS). The main purpose of this comparison is to verify whether observations taken with this novel type of telescope are reliable. This comparison is important because LMTs are an inexpensive novel type of telescope that is very useful for astronomical surveys, particularly surveys in the time domain, and validation of the data taken with an LMT by comparison with data from a classical telescope will validate their reliability. We start from a published data analysis that classified as peculiar only 206 of the 18,000 astronomical objects observed with the NODO LMT. A total of 29 of these 206 objects were found in the SDSS. The reliability of the NODO data can be seen through the results of the detailed analysis that, in practice, incorrectly identified less than 0.3% of the 18,000 spectra as peculiar objects, most likely because they are variable stars. We conclude that the LMT gave reliable observations, comparable to those that would have been obtained with a telescope using a glass mirror.

  8. Validity and reproducibility of cephalometric measurements obtained from digital photographs of analogue headfilms.

    PubMed

    Grybauskas, Simonas; Balciuniene, Irena; Vetra, Janis

    2007-01-01

The emerging market of digital cephalographs and computerized cephalometry is overwhelming the need to examine the advantages and drawbacks of manual cephalometry; meanwhile, small offices continue to benefit from the economic efficacy and ease of use of analogue cephalograms. The use of modern cephalometric software requires import of digital cephalograms or digital capture of analogue data: scanning and digital photography. The validity of digital photographs of analogue headfilms rather than original headfilms in clinical practice has not been well established. Digital photography could be a fast and inexpensive method of digital capture of analogue cephalograms for use in digital cephalometry. The objective of this study was to determine the validity and reproducibility of measurements obtained from digital photographs of analogue headfilms in lateral cephalometry. Analogue cephalometric radiographs were performed on 15 human dry skulls. Each of them was traced on acetate paper and photographed three times independently. Acetate tracings and digital photographs were digitized and analyzed in cephalometric software. A linear regression model, paired t-test intergroup analysis and the coefficient of repeatability were used to assess validity and reproducibility for 63 angular, linear and derivative measurements. 54 out of 63 measurements were determined to have clinically acceptable reproducibility in the acetate tracing group as well as 46 out of 63 in the digital photography group. The worst reproducibility was determined for measurements dependent on landmarks of incisors and poorly defined outlines, the majority of them being angular measurements. Validity was acceptable for all measurements, and although statistically significant differences between methods existed for as many as 15 parameters, they appeared to be clinically insignificant, being smaller than 1 unit of measurement. Validity was acceptable for 59 of 63 measurements obtained from digital photographs

  9. Reliability and validity of pendulum test measures of spasticity obtained with the Polhemus tracking system from patients with chronic stroke

    PubMed Central

    Bohannon, Richard W; Harrison, Steven; Kinsella-Shaw, Jeffrey

    2009-01-01

Background Spasticity is a common impairment accompanying stroke. Spasticity of the quadriceps femoris muscle can be quantified using the pendulum test. The measurement properties of pendular kinematics captured using a magnetic tracking system have not been studied among patients who have experienced a stroke. Therefore, this study describes the test-retest reliability and known groups and convergent validity of the pendulum test measures obtained with the Polhemus tracking system. Methods Eight patients with chronic stroke underwent pendulum tests with their affected and unaffected lower limbs, with and without the addition of a 2.2 kg cuff weight at the ankle, using the Polhemus magnetic tracking system. Also measured bilaterally were knee resting angles, Ashworth scores (grades 0–4) of quadriceps femoris muscles, patellar tendon (knee jerk) reflexes (grades 0–4), and isometric knee extension force. Results Three measures obtained from pendular traces of the affected side were reliable (intraclass correlation coefficient ≥ .844). Known groups validity was confirmed by demonstration of a significant difference in the measurements between sides. Convergent validity was supported by correlations ≥ .57 between pendulum test measures and other measures reflective of spasticity. Conclusion Pendulum test measures obtained with the Polhemus tracking system from the affected side of patients with stroke have good test-retest reliability and both known groups and convergent validity. PMID:19642989
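
    The reliability figures in this record are intraclass correlation coefficients. As a hedged illustration of how such a test-retest coefficient can be computed, the sketch below implements ICC(2,1) (two-way random effects, absolute agreement) on simulated two-session data; the numbers are invented and are not the Polhemus measurements.

      # Minimal sketch of a test-retest ICC(2,1) computation; simulated data only.
      import numpy as np

      def icc_2_1(Y):
          """Y: n_subjects x k_sessions matrix of repeated measurements."""
          n, k = Y.shape
          grand = Y.mean()
          ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
          ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # sessions
          ss_err = np.sum((Y - grand) ** 2) - (n - 1) * ms_rows - (k - 1) * ms_cols
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (
              ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

      rng = np.random.default_rng(1)
      true_score = rng.normal(60, 10, size=(8, 1))        # 8 subjects
      Y = true_score + rng.normal(0, 3, size=(8, 2))      # 2 test sessions
      print(f"ICC(2,1) = {icc_2_1(Y):.3f}")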

  10. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    ERIC Educational Resources Information Center

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  11. Noninvasive assessment of mitral inertness: clinical results with numerical model validation

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
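
    The record states only that Mdv/dt was separated from the transmitral gradient via the Bernoulli equation. Below is a minimal sketch of that separation, assuming the common simplified Bernoulli convention (gradient in mmHg, velocity in m/s, convective term of roughly 4v²) and using synthetic traces rather than the catheter/Doppler recordings.

      # Minimal sketch: split the transmitral pressure gradient into convective and
      # inertial (M dv/dt) parts via the simplified Bernoulli relation.  The traces
      # below are synthetic, not the study's data.
      import numpy as np

      t = np.linspace(0.0, 0.4, 200)                 # diastolic filling window (s)
      v = 0.8 * np.sin(np.pi * t / 0.4)              # transmitral velocity (m/s)
      dp = 6.0 * np.sin(np.pi * t / 0.4 + 0.3)       # LA-LV gradient (mmHg)

      convective = 4.0 * v ** 2                      # simplified Bernoulli term (mmHg)
      inertial = dp - convective                     # remaining M dv/dt component (mmHg)

      share = np.mean(np.abs(inertial)) / np.mean(np.abs(dp))
      print(f"inertial share of the transmitral gradient: {100 * share:.1f}%")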

  12. Results from an Independent View on The Validation of Safety-Critical Space Systems

    NASA Astrophysics Data System (ADS)

    Silva, N.; Lopes, R.; Esper, A.; Barbosa, R.

    2013-08-01

Independent verification and validation (IV&V) has been a key process for decades and is considered in several international standards. One of the activities described in the “ESA ISVV Guide” is independent test verification (stated as Integration/Unit Test Procedures and Test Data Verification). This activity is commonly overlooked since customers do not really see the added value of thoroughly checking the validation team's work (it could be seen as testing the tester's work). This article presents the consolidated results of a large set of independent test verification activities, including the main difficulties, the results obtained, and the advantages/disadvantages of these activities for industry. This study will support customers in opting in or out of this task in future IV&V contracts, since we provide concrete results from real case studies in the space embedded systems domain.

  13. Local Validation of Global Estimates of Biosphere Properties: Synthesis of Scaling Methods and Results Across Several Major Biomes

    NASA Technical Reports Server (NTRS)

    Cohen, Warren B.; Wessman, Carol A.; Aber, John D.; VanderCaslte, John R.; Running, Steven W.

    1998-01-01

    To assist in validating future MODIS land cover, LAI, IPAR, and NPP products, this project conducted a series of prototyping exercises that resulted in enhanced understanding of the issues regarding such validation. As a result, we have several papers to appear as a special issue of Remote Sensing of Environment in 1999. Also, we have been successful at obtaining a follow-on grant to pursue actual validation of these products over the next several years. This document consists of a delivery letter, including a listing of published papers.

  14. Measurements using orthodontic analysis software on digital models obtained by 3D scans of plaster casts : Intrarater reliability and validity.

    PubMed

    Czarnota, Judith; Hey, Jeremias; Fuhrmann, Robert

    2016-01-01

    The purpose of this work was to determine the reliability and validity of measurements performed on digital models with a desktop scanner and analysis software in comparison with measurements performed manually on conventional plaster casts. A total of 20 pairs of plaster casts reflecting the intraoral conditions of 20 fully dentate individuals were digitized using a three-dimensional scanner (D700; 3Shape). A series of defined parameters were measured both on the resultant digital models with analysis software (Ortho Analyzer; 3Shape) and on the original plaster casts with a digital caliper (Digimatic CD-15DCX; Mitutoyo). Both measurement series were repeated twice and analyzed for intrarater reliability based on intraclass correlation coefficients (ICCs). The results from the digital models were evaluated for their validity against the casts by calculating mean-value differences and associated 95 % limits of agreement (Bland-Altman method). Statistically significant differences were identified via a paired t test. Significant differences were obtained for 16 of 24 tooth-width measurements, for 2 of 5 sites of contact-point displacement in the mandibular anterior segment, for overbite, for maxillary intermolar distance, for Little's irregularity index, and for the summation indices of maxillary and mandibular incisor width. Overall, however, both the mean differences between the results obtained on the digital models versus on the plaster casts and the dispersion ranges associated with these differences suggest that the deviations incurred by the digital measuring technique are not clinically significant. Digital models are adequately reproducible and valid to be employed for routine measurements in orthodontic practice.
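
    The validity analysis in this record rests on mean differences with 95% limits of agreement (Bland-Altman) plus a paired t test. A minimal sketch of that comparison on simulated paired measurements (not the study's caliper and digital-model data) is shown below.

      # Minimal sketch of a mean-difference / 95% limits-of-agreement comparison
      # with a paired t test; both measurement series are simulated.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      caliper = rng.normal(8.0, 0.8, size=20)               # plaster-cast values (mm)
      digital = caliper + rng.normal(0.05, 0.15, size=20)   # digital-model values (mm)

      diff = digital - caliper
      bias = diff.mean()
      loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
      t_stat, p_val = stats.ttest_rel(digital, caliper)

      print(f"mean difference {bias:.3f} mm, 95% limits of agreement "
            f"[{loa[0]:.3f}, {loa[1]:.3f}] mm, paired t p = {p_val:.3f}")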

  15. Guideline for obtaining valid consent for gastrointestinal endoscopy procedures.

    PubMed

    Everett, Simon M; Griffiths, Helen; Nandasoma, U; Ayres, Katie; Bell, Graham; Cohen, Mike; Thomas-Gibson, Siwan; Thomson, Mike; Naylor, Kevin M T

    2016-10-01

    Much has changed since the last guideline of 2008, both in endoscopy and in the practice of obtaining informed consent, and it is vital that all endoscopists who are responsible for performing invasive and increasingly risky procedures are aware of the requirements for obtaining valid consent. This guideline is restricted to GI endoscopy but we cover elective and acute or emergency procedures. Few clinical trials have been carried out in relation to informed consent but most areas are informed by guidance from the General Medical Counsel (GMC) and/or are enshrined in legislation. Following an iterative voting process a series of recommendations have been drawn up that cover the majority of situations that will be encountered by endoscopists. This is not exhaustive and where doubt exists we have described where legal advice is likely to be required. This document relates to the law and endoscopy practice in the UK-where there is variation between the four devolved countries this is pointed out and endoscopists must be aware of the law where they practice. The recommendations are divided into consent for patients with and without capacity and we provide sections on provision of information and the consent process for patients in a variety of situations. This guideline is intended for use by all practitioners who request or perform GI endoscopy, or are involved in the pathway of such patients. If followed, we hope this document will enhance the experience of patients attending for endoscopy in UK units. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
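
    To make the mechanism described above concrete, the sketch below simulates a fully batch-confounded two-group data set with no true group signal and contrasts the (optimistic) cross-validated accuracy with accuracy on independent, shift-free data. It uses plain 5-fold CV rather than the study's nested scheme, and all data are simulated.

      # Minimal sketch of how batch-group confounding inflates a cross-validation
      # estimate; simulated expression-like data, not the study's data set.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_per_batch, n_feat = 50, 200

      def make_batch(group_label, batch_shift):
          X = rng.normal(size=(n_per_batch, n_feat)) + batch_shift
          y = np.full(n_per_batch, group_label)
          return X, y

      # Fully confounded design: batch 1 holds all controls, batch 2 all treated,
      # and the only systematic signal is the batch shift (no true group effect).
      X1, y1 = make_batch(0, batch_shift=0.0)
      X2, y2 = make_batch(1, batch_shift=0.5)
      X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])

      cv_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

      # Independent test data without the batch shift: the honest performance.
      Xt1, yt1 = make_batch(0, batch_shift=0.0)
      Xt2, yt2 = make_batch(1, batch_shift=0.0)
      clf = LogisticRegression(max_iter=1000).fit(X, y)
      ext_acc = clf.score(np.vstack([Xt1, Xt2]), np.concatenate([yt1, yt2]))

      print(f"cross-validated accuracy: {cv_acc:.2f} (optimistic)")
      print(f"independent-data accuracy: {ext_acc:.2f} (near chance)")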

  17. Results from SMAP Validation Experiments 2015 and 2016

    NASA Astrophysics Data System (ADS)

    Colliander, A.; Jackson, T. J.; Cosh, M. H.; Misra, S.; Crow, W.; Powers, J.; Wood, E. F.; Mohanty, B.; Judge, J.; Drewry, D.; McNairn, H.; Bullock, P.; Berg, A. A.; Magagi, R.; O'Neill, P. E.; Yueh, S. H.

    2017-12-01

NASA's Soil Moisture Active Passive (SMAP) mission was launched in January 2015. The objective of the mission is global mapping of soil moisture and freeze/thaw state. Well-characterized sites with calibrated in situ soil moisture measurements are used to determine the quality of the soil moisture data products; these sites are designated as core validation sites (CVS). To support the CVS-based validation, airborne field experiments are used to provide high-fidelity validation data and to improve the SMAP retrieval algorithms. The SMAP project and NASA coordinated airborne field experiments at three CVS locations in 2015 and 2016. SMAP Validation Experiment 2015 (SMAPVEX15) was conducted around the Walnut Gulch CVS in Arizona in August 2015. SMAPVEX16 was conducted at the South Fork CVS in Iowa and Carman CVS in Manitoba, Canada from May to August 2016. The airborne PALS (Passive Active L-band Sensor) instrument mapped all experiment areas several times, resulting in 30 coincident measurements with SMAP. The experiments included an intensive ground sampling regime consisting of manual sampling and augmentation of the CVS soil moisture measurements with temporary networks of soil moisture sensors. Analyses using the data from these experiments have produced various results regarding the SMAP validation and related science questions. The SMAPVEX15 data set has been used for calibration of a hyper-resolution model for soil moisture product validation; development of a multi-scale parameterization approach for surface roughness; and validation of disaggregation of SMAP soil moisture with optical thermal signal. The SMAPVEX16 data set has already been used for studying the spatial upscaling within a pixel with highly heterogeneous soil texture distribution; for understanding the process of radiative transfer at plot scale in relation to field scale and SMAP footprint scale over highly heterogeneous vegetation distribution; for testing a data fusion based soil moisture

  18. Reliability and validity of pendulum test measures of spasticity obtained with the Polhemus tracking system from patients with chronic stroke.

    PubMed

    Bohannon, Richard W; Harrison, Steven; Kinsella-Shaw, Jeffrey

    2009-07-30

Spasticity is a common impairment accompanying stroke. Spasticity of the quadriceps femoris muscle can be quantified using the pendulum test. The measurement properties of pendular kinematics captured using a magnetic tracking system have not been studied among patients who have experienced a stroke. Therefore, this study describes the test-retest reliability and known groups and convergent validity of the pendulum test measures obtained with the Polhemus tracking system. Eight patients with chronic stroke underwent pendulum tests with their affected and unaffected lower limbs, with and without the addition of a 2.2 kg cuff weight at the ankle, using the Polhemus magnetic tracking system. Also measured bilaterally were knee resting angles, Ashworth scores (grades 0-4) of quadriceps femoris muscles, patellar tendon (knee jerk) reflexes (grades 0-4), and isometric knee extension force. Three measures obtained from pendular traces of the affected side were reliable (intraclass correlation coefficient ≥ .844). Known groups validity was confirmed by demonstration of a significant difference in the measurements between sides. Convergent validity was supported by correlations ≥ .57 between pendulum test measures and other measures reflective of spasticity. Pendulum test measures obtained with the Polhemus tracking system from the affected side of patients with stroke have good test-retest reliability and both known groups and convergent validity.

  19. The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.

    PubMed

    O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard

    2010-08-01

    The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for the same 10 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and obtain the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables and in less time than the conventional approach of a handheld anthropometer.

  20. Noninvasive assessment of mitral inertness [correction of inertance]: clinical results with numerical model validation.

    PubMed

    Firstenberg, M S; Greenberg, N L; Smedira, N G; McCarthy, P M; Garcia, M J; Thomas, J D

    2001-01-01

Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.

  1. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Reporting initial validity and drug test results. 26.139... § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall... permitted under § 26.75(h), positive test results from initial drug tests at the licensee testing facility...

  2. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Reporting initial validity and drug test results. 26.139... § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall... permitted under § 26.75(h), positive test results from initial drug tests at the licensee testing facility...

  3. Techniques for obtaining subjective response to vertical vibration

    NASA Technical Reports Server (NTRS)

    Clarke, M. J.; Oborne, D. J.

    1975-01-01

    Laboratory experiments were performed to validate the techniques used for obtaining ratings in the field surveys carried out by the University College of Swansea. In addition, attempts were made to evaluate the basic form of the human response to vibration. Some of the results obtained by different methods are described.

  4. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  5. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  6. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  7. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  8. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  9. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  10. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  11. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  12. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  13. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Reporting initial validity and drug test results. 26.139 Section 26.139 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall...

  14. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Reporting initial validity and drug test results. 26.139 Section 26.139 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall...

  15. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Reporting initial validity and drug test results. 26.139 Section 26.139 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall...

  16. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, while a particular emphasis was placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for
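
    As a hedged illustration of the holdout-versus-spatial-cross-validation contrast discussed above, the sketch below builds a synthetic, spatially clustered presence/absence data set (not the Lower Austria inventory) and compares random 5-fold AUROC with leave-block-out AUROC using scikit-learn.

      # Minimal sketch contrasting random k-fold AUROC with spatially blocked
      # (leave-region-out) AUROC; synthetic point data only.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import KFold, GroupKFold, cross_val_score

      rng = np.random.default_rng(4)
      n = 600
      xy = rng.uniform(0, 10, size=(n, 2))                 # point coordinates (km)
      block = (xy[:, 0] // 2).astype(int)                  # 5 spatial blocks along x

      # Presence is spatially clustered: smooth trend plus a block-level effect.
      block_effect = rng.normal(0, 1.0, size=5)
      logit = np.sin(xy[:, 0]) + np.cos(xy[:, 1]) + block_effect[block]
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
      X = np.column_stack([xy, rng.normal(size=(n, 3))])   # coordinates + noise predictors

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      auc_random = cross_val_score(clf, X, y, scoring="roc_auc",
                                   cv=KFold(5, shuffle=True, random_state=0)).mean()
      auc_spatial = cross_val_score(clf, X, y, scoring="roc_auc",
                                    cv=GroupKFold(5), groups=block).mean()

      print(f"random 5-fold AUROC:  {auc_random:.2f}")
      print(f"spatial 5-fold AUROC: {auc_spatial:.2f} (typically lower)")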

  17. Results and current status of the NPARC alliance validation effort

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Jones, Ralph R.

    1996-01-01

    The NPARC Alliance is a partnership between the NASA Lewis Research Center (LeRC) and the USAF Arnold Engineering Development Center (AEDC) dedicated to the establishment of a national CFD capability, centered on the NPARC Navier-Stokes computer program. The three main tasks of the Alliance are user support, code development, and validation. The present paper is a status report on the validation effort. It describes the validation approach being taken by the Alliance. Representative results are presented for laminar and turbulent flat plate boundary layers, a supersonic axisymmetric jet, and a glancing shock/turbulent boundary layer interaction. Cases scheduled to be run in the future are also listed. The archive of validation cases is described, including information on how to access it via the Internet.

  18. Validity of maximal isometric knee extension strength measurements obtained via belt-stabilized hand-held dynamometry in healthy adults.

    PubMed

    Ushiyama, Naoko; Kurobe, Yasushi; Momose, Kimito

    2017-11-01

    [Purpose] To determine the validity of knee extension muscle strength measurements using belt-stabilized hand-held dynamometry with and without body stabilization compared with the gold standard isokinetic dynamometry in healthy adults. [Subjects and Methods] Twenty-nine healthy adults (mean age, 21.3 years) were included. Study parameters involved right side measurements of maximal isometric knee extension strength obtained using belt-stabilized hand-held dynamometry with and without body stabilization and the gold standard. Measurements were performed in all subjects. [Results] A moderate correlation and fixed bias were found between measurements obtained using belt-stabilized hand-held dynamometry with body stabilization and the gold standard. No significant correlation and proportional bias were found between measurements obtained using belt-stabilized hand-held dynamometry without body stabilization and the gold standard. The strength identified using belt-stabilized hand-held dynamometry with body stabilization may not be commensurate with the maximum strength individuals can generate; however, it reflects such strength. In contrast, the strength identified using belt-stabilized hand-held dynamometry without body stabilization does not reflect the maximum strength. Therefore, a chair should be used to stabilize the body when performing measurements of maximal isometric knee extension strength using belt-stabilized hand-held dynamometry in healthy adults. [Conclusion] Belt-stabilized hand-held dynamometry with body stabilization is more convenient than the gold standard in clinical settings.

  19. Comparison of Anaerobic Susceptibility Results Obtained by Different Methods

    PubMed Central

    Rosenblatt, J. E.; Murray, P. R.; Sonnenwirth, A. C.; Joyce, J. L.

    1979-01-01

    Susceptibility tests using 7 antimicrobial agents (carbenicillin, chloramphenicol, clindamycin, penicillin, cephalothin, metronidazole, and tetracycline) were run against 35 anaerobes including Bacteroides fragilis (17), other gram-negative bacilli (7), clostridia (5), peptococci (4), and eubacteria (2). Results in triplicate obtained by the microbroth dilution method and the aerobic modification of the broth disk method were compared with those obtained with an agar dilution method using Wilkins-Chalgren agar. Media used in the microbroth dilution method included Wilkins-Chalgren broth, brain heart infusion broth, brucella broth, tryptic soy broth, thioglycolate broth, and Schaedler's broth. A result differing by more than one dilution from the Wilkins-Chalgren agar result was considered a discrepancy, and when there was a change in susceptibility status this was termed a significant discrepancy. The microbroth dilution method using Wilkins-Chalgren broth and thioglycolate broth produced the fewest total discrepancies (22 and 24, respectively), and Wilkins-Chalgren broth, thioglycolate, and Schaedler's broth had the fewest significant discrepancies (6, 5, and 5, respectively). With the broth disk method, there were 15 significant discrepancies, although half of these were with tetracycline, which was the antimicrobial agent associated with the highest number of significant discrepancies (33), considering all of the test methods and media. PMID:464560

  20. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner dissatisfied with a change to the diagnostic or procedural coding information made by a QIO as a result of DRG...

  1. Helicopter simulation validation using flight data

    NASA Technical Reports Server (NTRS)

    Key, D. L.; Hansen, R. S.; Cleveland, W. B.; Abbott, W. Y.

    1982-01-01

    A joint NASA/Army effort to perform a systematic ground-based piloted simulation validation assessment is described. The best available mathematical model for the subject helicopter (UH-60A Black Hawk) was programmed for real-time operation. Flight data were obtained to validate the math model, and to develop models for the pilot control strategy while performing mission-type tasks. The validated math model is to be combined with motion and visual systems to perform ground based simulation. Comparisons of the control strategy obtained in flight with that obtained on the simulator are to be used as the basis for assessing the fidelity of the results obtained in the simulator.

  2. An improved and validated RNA HLA class I SBT approach for obtaining full length coding sequences.

    PubMed

    Gerritsen, K E H; Olieslagers, T I; Groeneweg, M; Voorter, C E M; Tilanus, M G J

    2014-11-01

    The functional relevance of human leukocyte antigen (HLA) class I allele polymorphism beyond exons 2 and 3 is difficult to address because more than 70% of the HLA class I alleles are defined by exons 2 and 3 sequences only. For routine application on clinical samples we improved and validated the HLA sequence-based typing (SBT) approach based on RNA templates, using either a single locus-specific or two overlapping group-specific polymerase chain reaction (PCR) amplifications, with three forward and three reverse sequencing reactions for full length sequencing. Locus-specific HLA typing with RNA SBT of a reference panel, representing the major antigen groups, showed identical results compared to DNA SBT typing. Alleles encountered with unknown exons in the IMGT/HLA database and three samples, two with Null and one with a Low expressed allele, have been addressed by the group-specific RNA SBT approach to obtain full length coding sequences. This RNA SBT approach has proven its value in our routine full length definition of alleles. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Evaluation of a statistics-based Ames mutagenicity QSAR model and interpretation of the results obtained.

    PubMed

    Barber, Chris; Cayley, Alex; Hanser, Thierry; Harding, Alex; Heghes, Crina; Vessey, Jonathan D; Werner, Stephane; Weiner, Sandy K; Wichard, Joerg; Giddings, Amanda; Glowienke, Susanne; Parenty, Alexis; Brigo, Alessandro; Spirkl, Hans-Peter; Amberg, Alexander; Kemper, Ray; Greene, Nigel

    2016-04-01

    The relative wealth of bacterial mutagenicity data available in the public literature means that in silico quantitative/qualitative structure activity relationship (QSAR) systems can readily be built for this endpoint. A good means of evaluating the performance of such systems is to use private unpublished data sets, which generally represent a more distinct chemical space than publicly available test sets and, as a result, provide a greater challenge to the model. However, raw performance metrics should not be the only factor considered when judging this type of software since expert interpretation of the results obtained may allow for further improvements in predictivity. Enough information should be provided by a QSAR to allow the user to make general, scientifically-based arguments in order to assess and overrule predictions when necessary. With all this in mind, we sought to validate the performance of the statistics-based in vitro bacterial mutagenicity prediction system Sarah Nexus (version 1.1) against private test data sets supplied by nine different pharmaceutical companies. The results of these evaluations were then analysed in order to identify findings presented by the model which would be useful for the user to take into consideration when interpreting the results and making their final decision about the mutagenic potential of a given compound. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for detection of genotoxic carcinogens: II. Summary of definitive validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Results and Validation of MODIS Aerosol Retrievals Over Land and Ocean

    NASA Technical Reports Server (NTRS)

    Remer, Lorraine; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.

  6. Results and Validation of MODIS Aerosol Retrievals over Land and Ocean

    NASA Technical Reports Server (NTRS)

    Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Ichoku, C.; Chu, D. A.; Mattoo, S.; Levy, R.; Martins, J. V.; Li, R.-R.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.

  7. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
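
    A minimal sketch of how points on a bias detection curve can be simulated for one MA procedure is given below; the assay distribution, truncation limits, control limits and window size are illustrative assumptions, not the settings validated in the paper.

      # Minimal sketch of a bias detection curve simulation for a moving-average
      # QC procedure; all settings below are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(5)
      MEAN, SD = 140.0, 3.0          # e.g. sodium, mmol/L
      TRUNC = (130.0, 150.0)         # results outside are excluded from the MA
      CONTROL = (138.5, 141.5)       # MA control limits
      WINDOW = 20                    # simple moving-average window

      def results_to_detection(bias, max_n=2000):
          """Number of post-bias results until the moving average exceeds a limit."""
          buf = list(MEAN + rng.normal(0, SD, WINDOW))   # pre-fill with unbiased results
          for n in range(1, max_n + 1):
              x = MEAN + bias + rng.normal(0, SD)
              if TRUNC[0] <= x <= TRUNC[1]:
                  buf.pop(0)
                  buf.append(x)
              ma = np.mean(buf)
              if ma < CONTROL[0] or ma > CONTROL[1]:
                  return n
          return max_n

      for bias in (1.0, 2.0, 3.0, 4.0):
          runs = [results_to_detection(bias) for _ in range(200)]
          print(f"bias {bias:+.1f} mmol/L -> median results to detection: "
                f"{int(np.median(runs))}")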

  8. Some Results on Sea Ice Rheology for the Seasonal Ice Zone, Obtained from the Deformation Field of Sea Ice Drift Pattern

    NASA Astrophysics Data System (ADS)

    Toyota, T.; Kimura, N.

    2017-12-01

Sea ice rheology, which relates sea ice stress to the large-scale deformation of the ice cover, has been a major issue in numerical sea ice modelling. At present the treatment of internal stress within the sea ice area is based mostly on the rheology formulated by Hibler (1979), where the whole sea ice area behaves like an isotropic, plastic material under ordinary stress with the yield curve given by an ellipse with an aspect ratio (e) of 2, irrespective of sea ice area and horizontal resolution of the model. However, this formulation was initially developed to reproduce the seasonal variation of the perennial ice in the Arctic Ocean. As for its applicability to the seasonal ice zones (SIZ), where various types of sea ice are present, it still needs validation from observational data. In this study, the validity of this rheology was examined for the Sea of Okhotsk ice, typical of the SIZ, based on the AMSR-derived ice drift pattern in comparison with the result obtained for the Beaufort Sea. To examine the dependence on horizontal scale, data from the coastal radar operated near the Hokkaido coast, Japan, were also used. The ice drift pattern was obtained by a maximum cross-correlation method with grid spacings of 37.5 km from the 89 GHz brightness temperature of AMSR-E for the entire Sea of Okhotsk and the Beaufort Sea and 1.3 km from the coastal radar for the near-shore Sea of Okhotsk. The validity of this rheology was investigated from the standpoint of the work rate done by the deformation field, following the theory of Rothrock (1975). In the analysis, the relative rates of convergence were compared between theory and observation to check the shape of the yield curve, and the strain ellipse at each grid cell was estimated to see the horizontal variation of the deformation field. The result shows that the ellipse of e=1.7-2.0 as the yield curve represents the observed relative convergence rates well for all the ice areas. Since this result corresponds with the yield criterion by Tresca and
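
    For reference, the constitutive law being tested is the standard viscous-plastic form of Hibler (1979) with an elliptical yield curve of aspect ratio e; the notation below follows common usage and is not reproduced from the record itself.

      \sigma_{ij} \;=\; 2\eta\,\dot\varepsilon_{ij} \;+\; (\zeta-\eta)\,\dot\varepsilon_{kk}\,\delta_{ij} \;-\; \tfrac{P}{2}\,\delta_{ij},
      \qquad \zeta = \frac{P}{2\Delta}, \qquad \eta = \frac{\zeta}{e^{2}},
      \qquad
      \Delta = \Bigl[(\dot\varepsilon_{11}^{2}+\dot\varepsilon_{22}^{2})(1+e^{-2})
               + 4e^{-2}\dot\varepsilon_{12}^{2}
               + 2\dot\varepsilon_{11}\dot\varepsilon_{22}(1-e^{-2})\Bigr]^{1/2}.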

  9. Examinee Noneffort and the Validity of Program Assessment Results

    ERIC Educational Resources Information Center

    Wise, Steven L.; DeMars, Christine E.

    2010-01-01

    Educational program assessment studies often use data from low-stakes tests to provide evidence of program quality. The validity of scores from such tests, however, is potentially threatened by examinee noneffort. This study investigated the extent to which one type of noneffort--rapid-guessing behavior--distorted the results from three types of…

  10. Validation Test Results for Orthogonal Probe Eddy Current Thruster Inspection System

    NASA Technical Reports Server (NTRS)

    Wincheski, Russell A.

    2007-01-01

Recent nondestructive evaluation efforts within NASA have focused on an inspection system for the detection of intergranular cracking originating in the relief radius of Primary Reaction Control System (PRCS) thrusters. Of particular concern is deep cracking in this area, which could lead to combustion leakage in the event of through-wall cracking from the relief radius into an acoustic cavity of the combustion chamber. In order to reliably detect such defects while ensuring minimal false positives during inspection, the Orthogonal Probe Eddy Current (OPEC) system has been developed and an extensive validation study performed. This report describes the validation procedure, sample set, and inspection results as well as comparing validation flaws with the response from naturally occurring damage.

  11. Ride qualities criteria validation/pilot performance study: Flight test results

    NASA Technical Reports Server (NTRS)

    Nardi, L. U.; Kawana, H. Y.; Greek, D. C.

    1979-01-01

    Pilot performance during a terrain following flight was studied for ride quality criteria validation. Data from manual and automatic terrain following operations conducted during low level penetrations were analyzed to determine the effect of ride qualities on crew performance. The conditions analyzed included varying levels of turbulence, terrain roughness, and mission duration with a ride smoothing system on and off. Limited validation of the B-1 ride quality criteria and some of the first order interactions between ride qualities and pilot/vehicle performance are highlighted. An earlier B-1 flight simulation program correlated well with the flight test results.

  12. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  13. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  14. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  15. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  16. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  17. ExEP yield modeling tool and validation test results

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required the interpretation of a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  18. Assessing Procedural Competence: Validity Considerations.

    PubMed

    Pugh, Debra M; Wood, Timothy J; Boulet, John R

    2015-10-01

    Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees' skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence as they relate to SBME. A case study is presented.

  19. Validation Results for LEWICE 2.0. [Supplement

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Rutkowski, Adam

    1999-01-01

    Two CD-ROMs contain experimental ice shapes and code prediction used for validation of LEWICE 2.0 (see NASA/CR-1999-208690, CASI ID 19990021235). The data include ice shapes for both experiment and for LEWICE, all of the input and output files for the LEWICE cases, JPG files of all plots generated, an electronic copy of the text of the validation report, and a Microsoft Excel(R) spreadsheet containing all of the quantitative measurements taken. The LEWICE source code and executable are not contained on the discs.

  20. Validation of the concentration profiles obtained from the near infrared/multivariate curve resolution monitoring of reactions of epoxy resins using high performance liquid chromatography as a reference method.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2007-03-07

    This paper reports the validation of the results obtained by combining near infrared spectroscopy and multivariate curve resolution-alternating least squares (MCR-ALS), using high performance liquid chromatography as a reference method, for the model reaction of phenylglycidylether (PGE) and aniline. The results are obtained as concentration profiles over the reaction time. The trueness of the proposed method has been evaluated in terms of lack of bias. The joint test for the intercept and the slope showed that there were no significant differences between the profiles calculated spectroscopically and the ones obtained experimentally by means of the chromatographic reference method, at an overall significance level of 5%. The uncertainty of the results was estimated by using information derived from the process of assessment of trueness. Operational aspects such as the cost and availability of instrumentation and the length and cost of the analysis were also evaluated. The proposed method is a good way of monitoring the reactions of epoxy resins, and it adequately shows how the species concentrations vary over time.
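
    The joint intercept/slope test described above can be reproduced in outline with ordinary least squares. The sketch below uses synthetic concentration pairs in place of the PGE/aniline data; the variable names and noise level are invented, and statsmodels' f_test is used for the joint hypothesis intercept = 0, slope = 1.

    ```python
    # Hypothetical sketch: joint test that intercept = 0 and slope = 1 when regressing
    # spectroscopic (NIR/MCR-ALS) concentrations against HPLC reference values,
    # i.e. a lack-of-bias (trueness) check. Data are synthetic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    c_hplc = np.linspace(0.1, 1.0, 15)                    # reference concentrations (illustrative)
    c_nir = c_hplc + rng.normal(0, 0.01, c_hplc.size)     # spectroscopic estimates with small noise

    X = sm.add_constant(c_hplc)          # design matrix columns: const, x1
    fit = sm.OLS(c_nir, X).fit()

    # Joint F-test of H0: intercept = 0 and slope = 1 (no constant or proportional bias)
    joint = fit.f_test("const = 0, x1 = 1")
    print(fit.params)
    print(joint)    # H0 is not rejected at the 5% level if the p-value exceeds 0.05
    ```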

  1. Validation environment for AIPS/ALS: Implementation and results

    NASA Technical Reports Server (NTRS)

    Segall, Zary; Siewiorek, Daniel; Caplan, Eddie; Chung, Alan; Czeck, Edward; Vrsalovic, Dalibor

    1990-01-01

    This report presents the work performed in porting the Fault Injection-based Automated Testing (FIAT) and Programming and Instrumentation Environments (PIE) validation tools to the Advanced Information Processing System (AIPS) in the context of the Ada Language System (ALS) application, as well as an initial fault-free validation of the available AIPS system. The PIE components implemented on AIPS provide the monitoring mechanisms required for validation. These mechanisms represent a substantial portion of the FIAT system and are required for the implementation of the FIAT environment on AIPS. Using these components, an initial fault-free validation of the AIPS system was performed. The implementation of the FIAT/PIE system, configured for fault-free validation of the AIPS fault-tolerant computer system, is described. The PIE components were modified to support the Ada language, and a special-purpose AIPS/Ada runtime monitoring and data collection facility was implemented. A number of initial Ada programs running on the PIE/AIPS system were implemented; their instrumentation was accomplished automatically inside the PIE programming environment. PIE's on-line graphical views show vividly and accurately the performance characteristics of the Ada programs, the AIPS kernel, and the application's interaction with the AIPS kernel. The data collection mechanisms were written in a high-level language, Ada, and provide a high degree of flexibility for implementation under various system conditions.

  2. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of experiments under microgravity conditions in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment was performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launch angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) techniques for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired were post-processed to accurately determine the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly

  3. Functional performance following selective posterior rhizotomy: long-term results determined using a validated evaluative measure.

    PubMed

    Mittal, Sandeep; Farmer, Jean-Pierre; Al-Atassi, Borhan; Montpetit, Kathleen; Gervais, Nathalie; Poulin, Chantal; Benaroch, Thierry E; Cantin, Marie-André

    2002-09-01

    Selective posterior rhizotomy (SPR) may result in considerable benefit for children with spastic cerebral palsy. To date, however, there have been few studies in which validated functional outcome measures have been used to report surgical results beyond 3 years. The authors analyzed data obtained from the McGill Rhizotomy Database to determine long-term functional performance outcomes in patients who underwent lumbosacral dorsal rhizotomy performed using intraoperative electrophysiological monitoring. The study population was composed of children with debilitating spasticity who underwent SPR and were evaluated by a multidisciplinary team preoperatively and at 6 months and 1 year postoperatively. Quantitative standardized assessments of activities of daily living (ADL) were obtained using the Pediatric Evaluation of Disability Inventory (PEDI). Of 57 patients who met the entry criteria for the study, 41 completed the 3-year assessments and 30 completed the 5-year assessments. Statistical analysis demonstrated significant improvement in the mobility and self-care domains of the functional skills dimension at 1 year after SPR. The preoperative and 1-, 3-, and 5-year postoperative scaled scores for the mobility domain were 56, 64, 77.2, and 77.8, respectively. The scaled score for the self-care domain increased from 59 presurgery to 67.9, 81.6, and 82.4 at the 1-, 3-, and 5-year postoperative assessments, respectively. The results of this study support the presence of significant improvements in functional performance, based on PEDI scores obtained 1 year after SPR. The improvements persisted at the 3- and 5-year follow-up examinations. The authors conclude that SPR performed using intraoperative stimulation is valuable in the augmentation of motor function and self-care skills essential to the performance of ADL.

  4. Validating Facial Aesthetic Surgery Results with the FACE-Q.

    PubMed

    Kappos, Elisabeth A; Temp, Mathias; Schaefer, Dirk J; Haug, Martin; Kalbermatten, Daniel F; Toth, Bryant A

    2017-04-01

    In aesthetic clinical practice, surgical outcome is best measured by patient satisfaction and quality of life. For many years, there has been a lack of validated questionnaires. Recently, the FACE-Q was introduced, and the authors present the largest series of face-lift patients evaluated by the FACE-Q with the longest follow-up to date. Two hundred consecutive patients were identified who underwent high-superficial musculoaponeurotic system face lifts, with or without additional facial rejuvenation procedures, between January of 2005 and January of 2015. Patients were sent eight FACE-Q scales and were asked to answer questions with regard to their satisfaction. Rank analysis of covariance was used to compare different subgroups. The response rate was 38 percent. Combination of face lift with other procedures resulted in higher satisfaction than face lift alone (p < 0.05). Patients who underwent lipofilling as part of their face lift showed higher satisfaction than patients without lipofilling in three subscales (p < 0.05). Facial rejuvenation surgery, combining a high-superficial musculoaponeurotic system face lift with lipofilling and/or other facial rejuvenation procedures, resulted in a high level of patient satisfaction. The authors recommend the implementation of the FACE-Q by physicians involved in aesthetic facial surgery, to validate their clinical outcomes from a patient's perspective.

  5. Biomechanical stress maps of an artificial femur obtained using a new infrared thermography technique validated by strain gages.

    PubMed

    Shah, Suraj; Bougherara, Habiba; Schemitsch, Emil H; Zdero, Rad

    2012-12-01

    Femurs are the heaviest, longest, and strongest long bones in the human body and are routinely subjected to cyclic forces. Strain gages are commonly employed to experimentally validate finite element models of the femur in order to generate 3D stresses, yet there is little information on a relatively new infrared (IR) thermography technique now available for biomechanics applications. In this study, IR thermography validated with strain gages was used to measure the principal stresses in the artificial femur model from Sawbones (Vashon, WA, USA) increasingly being used for biomechanical research. The femur was instrumented with rosette strain gages and mechanically tested using average axial cyclic forces of 1500 N, 1800 N, and 2100 N, representing 3 times body weight for a 50 kg, 60 kg, and 70 kg person. The femur was oriented at 7° of adduction to simulate the single-legged stance phase of walking. Stress maps were also obtained using an IR thermography camera. Results showed good agreement of IR thermography vs. strain gage data with a correlation of R² = 0.99 and a slope of 1.08 for the straight line of best fit. IR thermography detected the highest principal stresses on the superior-posterior side of the neck, which yielded compressive values of -91.2 MPa (at 1500 N), -96.0 MPa (at 1800 N), and -103.5 MPa (at 2100 N). There was excellent correlation between IR thermography principal stress vs. axial cyclic force at 6 locations on the femur on the lateral (R² = 0.89-0.99), anterior (R² = 0.87-0.99), and posterior (R² = 0.81-0.99) sides. This study shows IR thermography's potential for future biomechanical applications. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework has been developed to enable the assessment and comparison of both dynamical and statistical downscaling methods. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1997-2008. This allows the isolated skill of downscaling methods to be investigated without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two validation datasets: one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of (i) correlation matrices, (ii) pairwise joint threshold exceedances, and (iii) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related, for instance, to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.
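
    One of the diagnostics mentioned above, the decorrelation length, can be estimated by fitting a decay model to inter-station correlations as a function of distance. The sketch below uses a synthetic station network and daily series (all values invented) and assumes an exponential decay exp(-d/L) as the fitted form; it is not the VALUE code.

    ```python
    # Hypothetical sketch: estimate a decorrelation length L by fitting exp(-d / L)
    # to pairwise inter-station correlations (synthetic stations and daily series).
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    n_st, n_days, true_L = 30, 2000, 400.0
    coords = rng.uniform(0, 1000, size=(n_st, 2))                  # station coordinates (km)

    # Build synthetic daily series with exponentially decaying spatial correlation
    dmat = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    cov = np.exp(-dmat / true_L)
    series = np.linalg.cholesky(cov + 1e-9 * np.eye(n_st)) @ rng.normal(size=(n_st, n_days))

    corr = np.corrcoef(series)                                     # station-by-station correlations
    iu = np.triu_indices(n_st, k=1)
    decay = lambda d, L: np.exp(-d / L)
    (L_hat,), _ = curve_fit(decay, dmat[iu], corr[iu], p0=[300.0])
    print(f"estimated decorrelation length: {L_hat:.0f} km")       # should recover ~400 km
    ```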

  7. Validation Results for LEWICE 3.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2005-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 3.0 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, interfaces to computational fluid dynamics (CFD) flow solvers, and an empirical model for the supercooled large droplet (SLD) regime. An extensive, quantitative comparison of the results against the database of ice shapes and collection efficiencies generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper shows the differences in collection efficiency between LEWICE 3.0 and experimental data. Due to the large amount of validation data available, a separate report is planned for ice shape comparison. This report first describes the LEWICE 3.0 model for water collection. A semi-empirical approach was used to incorporate first-order physical effects of large droplet phenomena into the icing software. Comparisons are then made to every single-element, two-dimensional case in the water collection database. Each condition was run using the following five assumptions: 1) potential flow, no splashing; 2) potential flow, no splashing, with 21-bin drop size distributions and a lift correction (angle of attack adjustment); 3) potential flow, with splashing; 4) Navier-Stokes, no splashing; and 5) Navier-Stokes, with splashing. Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.

  8. Production of high-fidelity electropherograms results in improved and consistent DNA interpretation: Standardizing the forensic validation process.

    PubMed

    Peters, Kelsey C; Swaminathan, Harish; Sheehan, Jennifer; Duffy, Ken R; Lun, Desmond S; Grgicak, Catherine M

    2017-11-01

    Samples containing low-copy numbers of DNA are routinely encountered in casework. The signal acquired from these sample types can be difficult to interpret because they do not always contain all of the genotypic information from each contributor, where the loss of genetic information is associated with sampling and detection effects. The present work focuses on developing a validation scheme to aid in mitigating the effects of the latter. We establish a scheme designed to simultaneously improve signal resolution and detection rates without costly large-scale experimental validation studies by applying a combined simulation- and experiment-based approach. Specifically, we parameterize an in silico DNA pipeline with experimental data acquired from the laboratory and use this to evaluate multifarious scenarios in a cost-effective manner. Metrics such as single-copy signal-to-noise resolution and false positive and false negative signal detection rates are used to select tenable laboratory parameters that result in high-fidelity signal in the single-copy regime. We demonstrate that the metrics acquired from simulation are consistent with experimental data obtained from two capillary electrophoresis platforms and various injection parameters. Once good resolution is obtained, analytical thresholds can be determined using detection error tradeoff analysis, if necessary. Decreasing the limit of detection of the forensic process to one copy of DNA is a powerful mechanism by which to increase the information content on minor components of a mixture, which is particularly important for probabilistic system inference. If the forensic pipeline is engineered such that high-fidelity electropherogram signal is obtained, then the likelihood ratio (LR) of a true contributor increases and the probability that the LR of a randomly chosen person is greater than one decreases. This is, potentially, the first step towards standardization of the analytical pipeline across operational laboratories.
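
    The threshold-selection idea sketched above, trading false positive against false negative detection rates, can be illustrated with invented peak-height distributions. The sketch below is not the authors' in silico pipeline; the gamma/lognormal distributions, the RFU scale, and the balancing criterion are purely hypothetical.

    ```python
    # Hypothetical sketch: choose an analytical threshold by trading off the false-positive
    # rate (noise called as allele) against the false-negative rate (true single-copy peak
    # missed). Peak-height distributions are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(2)
    noise_rfu = rng.gamma(shape=2.0, scale=5.0, size=20000)                  # baseline noise peaks
    single_copy_rfu = rng.lognormal(mean=np.log(60), sigma=0.5, size=5000)   # 1-copy allele peaks

    thresholds = np.arange(5, 150, 1)
    fp = np.array([(noise_rfu >= t).mean() for t in thresholds])
    fn = np.array([(single_copy_rfu < t).mean() for t in thresholds])

    # One possible criterion: pick the threshold where the two error rates balance
    i = int(np.argmin(np.abs(fp - fn)))
    print(f"threshold ~{thresholds[i]} RFU: FP={fp[i]:.3%}, FN={fn[i]:.3%}")
    ```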

  9. The Application of FT-IR Spectroscopy for Quality Control of Flours Obtained from Polish Producers

    PubMed Central

    Ceglińska, Alicja; Reder, Magdalena; Ciemniewska-Żytkiewicz, Hanna

    2017-01-01

    Samples of wheat, spelt, rye, and triticale flours produced by different Polish mills were studied by both classic chemical methods and FT-IR MIR spectroscopy. An attempt was made to statistically correlate FT-IR spectral data with reference data on the content of various components, for example, proteins, fats, ash, and fatty acids, as well as on properties such as moisture, falling number, and energetic value. This correlation resulted in calibrated and validated statistical models for versatile evaluation of unknown flour samples. The calibration data set was used to construct calibration models using the CSR and PLS methods with the leave-one-out cross-validation technique. The calibrated models were validated with a validation data set. The results obtained confirmed that application of statistical models based on MIR spectral data is a robust, accurate, precise, rapid, inexpensive, and convenient methodology for determination of flour characteristics, as well as for detection of the content of selected flour ingredients. The obtained models' characteristics were as follows: R² = 0.97, PRESS = 2.14; R² = 0.96, PRESS = 0.69; R² = 0.95, PRESS = 1.27; R² = 0.94, PRESS = 0.76, for content of proteins, lipids, ash, and moisture level, respectively. The best results for the CSR models were obtained for protein, ash, and crude fat (R² = 0.86, 0.82, and 0.78, respectively). PMID:28243483
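
    The PLS/leave-one-out workflow and the PRESS statistic quoted above can be sketched with scikit-learn. The "spectra" and response below are synthetic stand-ins for the FT-IR MIR data, and the choice of five latent variables is an arbitrary assumption.

    ```python
    # Hypothetical sketch of a PLS calibration with leave-one-out cross-validation,
    # reporting PRESS and cross-validated R^2 (synthetic data, not the flour spectra).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(3)
    n_samples, n_wavenumbers = 40, 300
    X = rng.normal(size=(n_samples, n_wavenumbers))                    # stand-in for MIR spectra
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=n_samples)   # stand-in for e.g. protein content

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

    press = float(np.sum((y - y_cv) ** 2))                    # prediction error sum of squares
    r2_cv = 1.0 - press / float(np.sum((y - y.mean()) ** 2))  # cross-validated R^2
    print(f"PRESS = {press:.2f}, R^2(CV) = {r2_cv:.3f}")
    ```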

  10. Stability-based validation of dietary patterns obtained by cluster analysis.

    PubMed

    Sauvageot, Nicolas; Schritz, Anna; Leite, Sonia; Alkerwi, Ala'a; Stranges, Saverio; Zannad, Faiez; Streel, Sylvie; Hoge, Axelle; Donneau, Anne-Françoise; Albert, Adelin; Guillaume, Michèle

    2017-01-14

    Cluster analysis is a data-driven method used to create clusters of individuals sharing similar dietary habits. However, this method requires specific choices from the user which have an influence on the results. Therefore, there is a need for an objective methodology to help researchers with these decisions during cluster analysis. The objective of this study was to use such a methodology, based on the stability of clustering solutions, to select the most appropriate clustering method and number of clusters for describing dietary patterns in the NESCAV study (Nutrition, Environment and Cardiovascular Health), a large population-based cross-sectional study in the Greater Region (N = 2298). Clustering solutions were obtained with the K-means, K-medians and Ward's methods and a number of clusters varying from 2 to 6. Their stability was assessed with three indices: the adjusted Rand index, Cramer's V and the misclassification rate. The most stable solution was obtained with the K-means method and a number of clusters equal to 3. The "Convenient" cluster, characterized by the consumption of convenience foods, was the most prevalent, with 46% of the population having this dietary behaviour. In addition, "Prudent" and "Non-Prudent" patterns, associated respectively with healthy and unhealthy dietary habits, were adopted by 25% and 29% of the population. The "Convenient" and "Non-Prudent" clusters were associated with higher cardiovascular risk, whereas the "Prudent" pattern was associated with decreased cardiovascular risk. Associations with other factors showed that the choice of a specific dietary pattern is part of a wider lifestyle profile. This study is of interest for both researchers and public health professionals. From a methodological standpoint, we showed that using the stability of clustering solutions could help researchers in their choices. From a public health perspective, this study showed the need for targeted health promotion campaigns describing the benefits of healthy
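
    The stability criterion described above can be approximated by clustering repeated subsamples and scoring their agreement with the adjusted Rand index. The sketch below uses synthetic features in place of the dietary data and considers only K-means; the subsample fraction and number of repeats are arbitrary choices, not the study's settings.

    ```python
    # Hypothetical sketch of stability-based selection of the number of clusters k:
    # cluster pairs of random subsamples with K-means and compare their labels on the
    # shared points using the adjusted Rand index (synthetic data for illustration).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(loc=m, scale=1.0, size=(300, 8)) for m in (-3, 0, 3)])

    def stability(X, k, n_pairs=20, frac=0.8):
        scores, n = [], len(X)
        for _ in range(n_pairs):
            a = rng.choice(n, size=int(frac * n), replace=False)
            b = rng.choice(n, size=int(frac * n), replace=False)
            common = np.intersect1d(a, b)
            ka = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[a])
            kb = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[b])
            scores.append(adjusted_rand_score(ka.predict(X[common]), kb.predict(X[common])))
        return float(np.mean(scores))

    for k in range(2, 7):
        print(k, round(stability(X, k), 3))   # the most stable k should stand out (here, 3)
    ```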

  11. Earth Radiation Budget Experiment (ERBE) validation

    NASA Technical Reports Server (NTRS)

    Barkstrom, Bruce R.; Harrison, Edwin F.; Smith, G. Louis; Green, Richard N.; Kibler, James F.; Cess, Robert D.

    1990-01-01

    During the past 4 years, data from the Earth Radiation Budget Experiment (ERBE) have been undergoing detailed examination. There is no direct source of ground truth for the radiation budget; thus, this validation effort has had to rely heavily upon intercomparisons between different types of measurements. The ERBE Science Team chose 10 measures of agreement as validation criteria. Late in August 1988, the Team agreed that the data met these conditions. As a result, the final, monthly averaged data products are being archived. These products, their validation, and some results for January 1986 are described. Information is provided on obtaining the data from the archive.

  12. Ink dating using thermal desorption and gas chromatography/mass spectrometry: comparison of results obtained in two laboratories.

    PubMed

    Koenig, Agnès; Bügler, Jürgen; Kirsch, Dieter; Köhler, Fritz; Weyermann, Céline

    2015-01-01

    An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory and to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to instrument maintenance. Moreover, the aging curves were influenced by ink composition as well as by storage conditions (particularly when the samples were not stored under normal room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity. © 2014 American Academy of Forensic Sciences.

  13. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
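
    The estimand discussed above, the treatment-specific mean, can be illustrated with a plain doubly robust (AIPW) estimator built from off-the-shelf nuisance learners. This is not the paper's C-TMLE and it omits the targeting and collaborative selection steps; the data-generating process and the learners below are assumptions made purely for illustration.

    ```python
    # Not the paper's C-TMLE: a plain doubly robust (AIPW) sketch of the treatment-specific
    # mean E[Y(1)], with the propensity score and outcome regression fitted by generic
    # learners on synthetic data (true value is 2 in this toy setup).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(5)
    n = 5000
    W = rng.normal(size=(n, 3))                                # baseline covariates
    p = 1 / (1 + np.exp(-(0.5 * W[:, 0] - 0.5 * W[:, 1])))     # true propensity score
    A = rng.binomial(1, p)                                     # binary treatment
    Y = 1.0 + A + W[:, 0] + rng.normal(size=n)                 # outcome; true E[Y(1)] = 2

    g = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]              # estimated propensity
    Q1 = GradientBoostingRegressor().fit(W[A == 1], Y[A == 1]).predict(W)  # outcome model under A = 1

    aipw = Q1 + (A / np.clip(g, 0.01, None)) * (Y - Q1)        # efficient influence-function form
    est = aipw.mean()
    se = aipw.std(ddof=1) / np.sqrt(n)                         # influence-curve-based standard error
    print(f"E[Y(1)] ~ {est:.3f} +/- {1.96 * se:.3f}")
    ```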

  14. Obtaining patient test results from clinical laboratories: a survey of state law for pharmacists.

    PubMed

    Witry, Matthew J; Doucette, William R

    2009-01-01

    To identify states with laws that restrict to whom clinical laboratories may release copies of laboratory test results and to describe how these laws may affect pharmacists' ability to obtain patient laboratory test results. Researchers examined state statutes and administrative codes for all 50 states and the District of Columbia at the University of Iowa Law Library between June and July 2007. Researchers also consulted with lawyers, state Clinical Laboratory Improvement Amendments officers, and law librarians. Laws relating to the study objective were analyzed. 34 jurisdictions do not restrict the release of laboratory test results, while 17 states have laws that restrict to whom clinical laboratories can send copies of test results. In these states, pharmacists will have to use alternative sources, such as physician offices, to obtain test results. Pharmacists must consider state law before requesting copies of laboratory test results from clinical laboratories. This may be an issue that state pharmacy associations can address to increase pharmacist access to important patient information.

  15. Validation of an electronic device for measuring driving exposure.

    PubMed

    Huebner, Kyla D; Porter, Michelle M; Marshall, Shawn C

    2006-03-01

    This study sought to evaluate an on-board diagnostic system (CarChip) for collecting driving exposure data in older drivers. Drivers (N = 20) aged 60 to 86 years from Winnipeg and surrounding communities participated. Information on driving exposure was obtained via the CarChip and global positioning system (GPS) technology on a driving course, and obtained via the CarChip and surveys over a week of driving. Velocities and distances were measured over the road course to validate the accuracy of the CarChip compared to GPS for those parameters. The results show that the CarChip does provide valid distance measurements and slightly lower maximum velocities than GPS measures. From the results obtained in this study, it was determined that retrospective self-reports of weekly driving distances are inaccurate. Therefore, an on-board diagnostic system (OBDII) electronic device like the CarChip can provide valid and detailed information about driving exposure that would be useful for studies of crash rates or driving behavior.

  16. Validation results of specifications for motion control interoperability

    NASA Astrophysics Data System (ADS)

    Szabo, Sandor; Proctor, Frederick M.

    1997-01-01

    The National Institute of Standards and Technology (NIST) is participating in the Department of Energy Technologies Enabling Agile Manufacturing (TEAM) program to establish interface standards for machine tool, robot, and coordinate measuring machine controllers. At NIST, the focus is to validate potential application programming interfaces (APIs) that make it possible to exchange machine controller components with a minimal impact on the rest of the system. This validation is taking place in the enhanced machine controller (EMC) consortium and is in cooperation with users and vendors of motion control equipment. An area of interest is motion control, including closed-loop control of individual axes and coordinated path planning. Initial tests of the motion control APIs are complete. The APIs were implemented on two commercial motion control boards that run on two different machine tools. The results for a baseline set of APIs look promising, but several issues were raised. These include resolving differing approaches in how motions are programmed and defining a standard measurement of performance for motion control. This paper starts with a summary of the process used in developing a set of specifications for motion control interoperability. Next, the EMC architecture and its classification of motion control APIs into two classes, Servo Control and Trajectory Planning, are reviewed. Selected APIs are presented to explain the basic functionality and some of the major issues involved in porting the APIs to other motion controllers. The paper concludes with a summary of the main issues and ways to continue the standards process.

  17. Validation of microbiological testing in cardiovascular tissue banks: results of a quality round trial.

    PubMed

    de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter

    2017-11-01

    Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of processing methods in 20 tissue banks and 1 reference laboratory. We did this to validate the results for accepting or rejecting tissue. We included the decontamination methods used and the influence of antibiotic cocktails and residues with results and controls. The minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods of heart valve allografts in cardiovascular tissue banks, an international quality round was organized. Twenty cardiovascular tissue banks participated in this quality round. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants. The participants identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples were heart valves to be released for implantation, 3 of the 20 participants would have decided to accept their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved, will surgeons be able to fully rely on the methods

  18. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    PubMed

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation in Earth's atmospheric, oceanic, and land surface parameters of satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a bright index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data by two cloud-screening algorithms (MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The influence of the error propagation by the MOD35 cloud mask on the MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than that by the CLAUDIA cloud mask, whereas the influence of the error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
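
    A minimal per-pixel cloud decision from a sky camera image can be sketched as below. The index definition (B - R)/(B + R) and the threshold are assumptions for illustration only; the paper's actual sky index and bright index formulations are not reproduced here.

    ```python
    # Illustrative only: a simple per-pixel cloud/clear decision from a ground-based
    # sky camera, using an assumed sky index of the form (B - R) / (B + R).
    import numpy as np

    def cloud_mask_from_rgb(rgb: np.ndarray, threshold: float = 0.12) -> np.ndarray:
        """rgb: HxWx3 float array in [0, 1]. Returns a boolean mask (True = cloudy)."""
        r = rgb[..., 0].astype(float)
        b = rgb[..., 2].astype(float)
        sky_index = (b - r) / np.clip(b + r, 1e-6, None)
        return sky_index < threshold        # clear sky is strongly blue, i.e. high index

    # Tiny synthetic example: one blue (clear) row and one grey (cloudy) row
    img = np.zeros((2, 2, 3))
    img[0] = [0.2, 0.4, 0.9]   # blue-ish pixel -> clear
    img[1] = [0.7, 0.7, 0.7]   # grey pixel -> cloudy
    print(cloud_mask_from_rgb(img))
    ```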

  20. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  1. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  2. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  3. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network.

    PubMed

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-06-01

    Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.

  4. [Short evaluation of cognitive state in advanced stages of dementia: preliminary results of the Spanish validation of the Severe Mini-Mental State Examination].

    PubMed

    Buiza, Cristina; Navarro, Ana; Díaz-Orueta, Unai; González, Mari Feli; Alaba, Javier; Arriola, Enrique; Hernández, Carmen; Zulaica, Amaia; Yanguas, José Javier

    2011-01-01

    The cognitive assessment of patients with advanced dementia requires appropriate screening instruments that allow information to be obtained about the cognitive state and the resources these individuals still retain. The present work presents a Spanish validation study of the Severe Mini-Mental State Examination (SMMSE). Forty-seven patients with advanced dementia (Mini-Cognitive Examination [MEC] < 11) were evaluated with Reisberg's Global Deterioration Scale (GDS), the MEC, the SMMSE and the Severe Cognitive Impairment Profile scales. All test items were discriminative. The test showed high internal consistency (α=0.88), test-retest reliability (0.64-1.00, P<.01) and inter-observer reliability (0.69-1.00, P<.01), both for total scores and for each item separately. Construct validity was tested through correlations between the instrument and MEC scores (r=0.59, P<.01). Further information on construct validity was obtained by dividing the sample into groups that scored above or below 5 points on the MEC and recalculating their correlations with the SMMSE. The correlation between the SMMSE and MEC scores was significant in the MEC 0-5 group (r=0.55, P<.05), but not in the MEC>5 group. Additionally, differences in scores between the three GDS groups (5, 6 and 7) were found in the SMMSE, but not in the MEC (H=11.1, P<.05). The SMMSE is an instrument for the assessment of advanced cognitive impairment which avoids the floor effect by extending the lower measurement range relative to that of the MEC. Based on our results, this rapid and easy-to-administer screening tool can be considered valid and reliable. Copyright © 2010 SEGG. Published by Elsevier Espana. All rights reserved.
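
    The internal consistency figure quoted above (α = 0.88) is Cronbach's alpha, which can be computed directly from an item-score matrix. The sketch below uses synthetic binary item scores for 47 hypothetical patients; only the standard alpha formula is assumed, not the study's actual data.

    ```python
    # Hypothetical sketch: Cronbach's alpha for an SMMSE-like item matrix,
    # with synthetic item scores standing in for the real patient data.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_subjects, n_items) score matrix."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
        total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
        return k / (k - 1) * (1.0 - item_var / total_var)

    rng = np.random.default_rng(6)
    ability = rng.normal(size=(47, 1))                                          # 47 simulated patients
    items = (ability + rng.normal(scale=0.8, size=(47, 10)) > 0).astype(float)  # 10 binary items
    print(round(cronbach_alpha(items), 2))   # alpha close to 1 indicates strong internal consistency
    ```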

  5. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in part 473— (a) The initial denial...

  6. Results obtained with a low cost software-based audiometer for hearing screening.

    PubMed

    Ferrari, Deborah Viviane; Lopez, Esteban Alejandro; Lopes, Andrea Cintra; Aiello, Camila Piccini; Jokura, Pricila Reis

    2013-07-01

    The implementation of hearing screening programs can be facilitated by reducing operating costs, including the cost of equipment. The Telessaúde (TS) audiometer is a low-cost, software-based, and easy-to-use piece of equipment for conducting audiometric screening. The aim of this study was to evaluate the TS audiometer for conducting audiometric screening. A prospective randomized study was performed. Sixty subjects, divided into those who did not have (group A, n = 30) and those who had otologic complaints (group B, n = 30), underwent audiometric screening with conventional and TS audiometers in a randomized order. Pure tones at 25 dB HL were presented at frequencies of 500, 1000, 2000, and 4000 Hz. A "fail" result was recorded when the individual failed to respond to at least one of the stimuli. Pure-tone audiometry was also performed on all participants. The concordance of the screening results obtained with the two audiometers was evaluated, and the sensitivity, specificity, and positive and negative predictive values of screening with the TS audiometer were calculated. For group A, 100% of the ears tested passed the screening. For group B, "pass" results were obtained in 34.2% (TS) and 38.3% (conventional) of the ears tested. The agreement between procedures (TS vs. conventional) ranged from 93% to 98%. For group B, screening with the TS audiometer showed 95.5% sensitivity, 90.4% specificity, and positive and negative predictive values of 94.9% and 91.5%, respectively. The results of the TS audiometer were similar to those obtained with the conventional audiometer, indicating that the TS audiometer can be used for audiometric screening.

  7. Spatial distribution of soil moisture obtained from gravimetric and TDR methods for SMOS validation, at the Polesie test site SVRT 3275, in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Marczewski, W.; Lipiec, J.; Usowicz, J. B.; Sokolowska, Z.; Dabkowska-Naskret, H.; Hajnos, M.; Lukowski, M. I.

    2009-04-01

    vegetation season. Permanent measurements are provided in profiles, down to 50 cm below the surface. Temporary SM measurements are collected with a hand-held TDR probe (FOM/mts type, Easy Test Ltd., Lublin, Poland) from the top surface layer (1-6 cm), on a grid covering small and large areas and containing a few hundred sites. At the same locations, soil samples are collected for gravimetric analysis of SM, bulk density, and other physical and textural characteristics. Measurement sessions over large areas, at the scale of a community, are repeated on separate days. The two methods were compared using the correlation coefficient, regression equations, and differences between values. The spatial variability of soil moisture from the gravimetric and TDR measurements was analyzed using geostatistical methods. The semivariogram parameters were determined and mathematical functions were fitted to the empirically derived semivariograms. These functions were used for estimation of the spatial distribution of soil moisture in cultivated fields by the kriging method. The results showed that the spatial distribution patterns of topsoil moisture in the investigated areas obtained from the TDR and gravimetric methods were in general similar to each other. The TDR soil moisture contents depended on the bulk density and texture of the soil. In areas with fine-textured soils of lower bulk densities (approximately below 1.35 Mg m^-3), the TDR soil moisture and its spatial differentiation were greater than those obtained with the gravimetric method; at higher bulk densities the inverse was true. The spatial patterns were further modified in areas dominated by coarse-textured soils. Decreasing the number of measurement points smooths the soil moisture pattern and at the same time increases the estimation error. The TDR method can be a useful tool for ground moisture measurements and validation of satellite data. The use of specific calibration or correction for soil bulk density and texture with respect to the
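
    The geostatistical step described above, binning an empirical semivariogram and fitting a model to it, can be sketched as follows. The coordinates, moisture values, bin width, and the choice of an exponential model with a nugget are all illustrative assumptions, not the study's actual parameters.

    ```python
    # Hypothetical sketch: empirical semivariogram of topsoil moisture plus an
    # exponential model fit (synthetic sampling points and moisture values).
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(7)
    xy = rng.uniform(0, 500, size=(200, 2))                                  # sampling points (m)
    sm = 0.25 + 0.05 * np.sin(xy[:, 0] / 150) + rng.normal(0, 0.01, 200)     # soil moisture (m3/m3)

    # Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs at lag h
    d = np.sqrt(((xy[:, None] - xy[None, :]) ** 2).sum(-1))
    g = 0.5 * (sm[:, None] - sm[None, :]) ** 2
    iu = np.triu_indices(len(sm), k=1)
    pairs_d, pairs_g = d[iu], g[iu]

    bins = np.arange(0, 300, 25)
    lag, gamma = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (pairs_d >= lo) & (pairs_d < hi)
        if sel.any():                      # skip empty lag bins
            lag.append(0.5 * (lo + hi))
            gamma.append(pairs_g[sel].mean())
    lag, gamma = np.array(lag), np.array(gamma)

    # Exponential model with nugget c0, partial sill c, and range parameter a
    model = lambda h, c0, c, a: c0 + c * (1 - np.exp(-h / a))
    (c0, c, a), _ = curve_fit(model, lag, gamma, p0=[1e-4, 1e-3, 100], bounds=(0, np.inf))
    print(f"nugget={c0:.2e}, sill={c0 + c:.2e}, range~{a:.0f} m")
    ```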

  8. Challenges in validating model results for first year ice

    NASA Astrophysics Data System (ADS)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed after ice formation as sea ice grows and deteriorates as it is advected inside the model domain. Ice that is younger than 365 days is classified as first year ice. The fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which include "ice type", a representation of the separation of regions between those infested by first year ice, and those infested by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and from the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data, and the product's confidence level, which have a strong seasonal dependency.

  9. Worldwide Protein Data Bank validation information: usage and trends.

    PubMed

    Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer

    2018-03-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrends DB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.

  10. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... working days of identification; (vi) For retrospective review, (excluding DRG validation and post...

  11. Satellite stratospheric aerosol measurement validation

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Mccormick, M. P.

    1984-01-01

    The validity of the stratospheric aerosol measurements made by the satellite sensors SAM II and SAGE was tested by comparing their results with each other and with results obtained by other techniques (lidar, dustsonde, filter, and impactor). The latter type of comparison required the development of special techniques that convert the quantity measured by the correlative sensor (e.g., particle backscatter, number, or mass) to that measured by the satellite sensor (extinction) and quantitatively estimate the uncertainty in the conversion process. The results of both types of comparisons show agreement within the measurement and conversion uncertainties. Moreover, the satellite uncertainty is small compared to the natural variability of the aerosol (caused by seasonal changes, volcanoes, sudden warmings, and vortex structure). It was concluded that the satellite measurements are valid.

  12. Simulated Driving Assessment (SDA) for Teen Drivers: Results from a Validation Study

    PubMed Central

    McDonald, Catherine C.; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S.; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K.

    2015-01-01

    Background Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardized assessments of teen driving skills exist. The purpose of this study was to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. Methods The SDA's 35-minute simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16–17 years, provisional license ≤90 days) and 17 experienced adults (age 25–50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor reviewed videos of SDA performance (DEI Score). Results The SDA demonstrated construct validity: 1.) Teens had a higher Error Score than adults (30 vs. 13, p=0.02); 2.) For each additional error committed, the relative risk of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI: 1.05–1.10, p<0.01). The SDA demonstrated criterion validity: Error Score was correlated with DEI Score (r=−0.66, p<0.001). Conclusions This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. PMID:25740939

  13. Impact of selective posterior rhizotomy on fine motor skills. Long-term results using a validated evaluative measure.

    PubMed

    Mittal, Sandeep; Farmer, Jean-Pierre; Al-Atassi, Borhan; Montpetit, Kathleen; Gervais, Nathalie; Poulin, Chantal; Cantin, Marie-André; Benaroch, Thierry E

    2002-03-01

    Suprasegmental effects following selective posterior rhizotomy have been frequently reported. However, few studies have used validated functional outcome measures to report surgical results beyond 3 years. The authors analyzed data obtained from the McGill Rhizotomy Database to determine the long-term impact of lumbosacral dorsal rhizotomy on fine motor skills. The study population comprised children with debilitating spasticity who underwent SPR and were evaluated by a multidisciplinary team preoperatively and at 6 months and 1 year postoperatively. Quantitative standardized assessments of upper extremity function were obtained using the fine motor skills section of the Peabody Developmental Motor Scales (PDMS) test. Of the 70 patients who met the entry criteria for the study, 45 and 25 completed the 3- and 5-year assessments, respectively. Statistical analysis demonstrated significant improvements in grasping, hand use, eye-hand coordination, and manual dexterity at 1 year after SPR. More importantly, all improvements were maintained at 3 and 5 years following SPR. This study supports the presence of significant improvements in upper extremity fine motor function after SPR, as measured with the PDMS, and shows that these suprasegmental benefits are durable. Copyright 2002 S. Karger AG, Basel

  14. Complete Statistical Survey Results of 1982 Texas Competency Validation Project.

    ERIC Educational Resources Information Center

    Rogers, Sandra K.; Dahlberg, Maurine F.

    This report documents a project to develop current statewide validated competencies for auto mechanics, diesel mechanics, welding, office occupations, and printing. Section 1 describes the four steps used in the current competency validation project and provides a standardized process for conducting future studies at the local or statewide level.…

  15. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... retrospective review, (excluding DRG validation and post procedure review), within 3 working days of the initial...

  16. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... retrospective review, (excluding DRG validation and post procedure review), within 3 working days of the initial...

  17. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes relevant to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities, in order to provide increased confidence in the base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase the likelihood of success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment within a building-block approach to validation, in which cold, non-reacting test data were used for validation first, followed by more complex reacting base flow validation.

  18. System Validation Experiments for Obtaining Tracer Laser-Induced Fluorescence Data at Elevated Pressure and Temperature.

    PubMed

    Hartwig, Jason; Mittal, Gaurav; Kumar, Kamal; Sung, Chih-Jen

    2018-04-01

    This paper presents a set of system validation experiments that can be used to qualify either static or flow experimental systems for gathering tracer photophysical data or conducting laser diagnostics at high pressure and temperature in order to establish design and operation limits and reduce uncertainty in data interpretation. Tests demonstrated here quantify the effect of tracer absorption at the test cell walls, stratification, photolysis, pyrolysis, adequacy of mixing and seeding, and reabsorption of laser light using acetone as the tracer and 282 nm excitation. Results show that acetone exhibits a 10% decrease in fluorescence signal over 36 000 shots at 127.4 mJ/cm2, and photolysis is negligible when fewer than 1000 shots are collected. Meanwhile, appropriately chosen gas residence times can mitigate risks due to pyrolysis and inadequate mixing and seeding; for the current work, a 100 ms residence time ensured <0.5% alteration of tracer number density due to thermal destruction. Experimental results here are compared to theoretical values from the literature.

  19. Validating the Adolescent Form of the Substance Abuse Subtle Screening Inventory.

    ERIC Educational Resources Information Center

    Risberg, Richard A.; And Others

    1995-01-01

    Tests validity of the Substance Abuse Subtle Screening Inventory (SASSI) in detecting chemical dependency in adolescents (n=107), when compared to the Minnesota Multiphasic Personality Inventory (MMPI) results. Further validation for the SASSI was obtained. Treatment implications and suggestions for further research are provided. (SNR)

  20. Validity and Reliability of a Systematic Database Search Strategy to Identify Publications Resulting From Pharmacy Residency Research Projects.

    PubMed

    Kwak, Namhee; Swan, Joshua T; Thompson-Moore, Nathaniel; Liebl, Michael G

    2016-08-01

    This study aims to develop a systematic search strategy and test its validity and reliability in terms of identifying projects published in peer-reviewed journals as reported by residency graduates through an online survey. This study was a prospective blind comparison to a reference standard. Pharmacy residency projects conducted at the study institution between 2001 and 2012 were included. A step-wise, systematic procedure containing up to 8 search strategies in PubMed and EMBASE for each project was created using the names of authors and abstract keywords. In order to further maximize sensitivity, complex phrases with multiple variations were truncated to the root word. Validity was assessed by obtaining information on publications from an online survey deployed to residency graduates. The search strategy identified 13 publications (93% sensitivity, 100% specificity, and 99% accuracy). Both methods identified a similar proportion achieving publication (19.7% search strategy vs 21.2% survey, P = 1.00). Reliability of the search strategy was affirmed by the perfect agreement between 2 investigators (k = 1.00). This systematic search strategy demonstrated a high sensitivity, specificity, and accuracy for identifying publications resulting from pharmacy residency projects using information available in residency conference abstracts. © The Author(s) 2015.
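
    For context, the validity statistics above come from a 2 x 2 cross-tabulation of strategy hits against the survey reference standard, and the reliability statistic is Cohen's kappa between the two investigators. A minimal sketch (Python, hypothetical counts and ratings chosen only for illustration):

        import numpy as np

        # Hypothetical 2 x 2 counts: search strategy result vs. survey reference standard
        tp, fn = 13, 1      # publications found / missed by the search strategy
        fp, tn = 0, 52      # false hits / true negatives

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / (tp + fn + fp + tn)

        def cohens_kappa(a, b):
            # Cohen's kappa for two raters' binary judgments (chance-corrected agreement).
            a, b = np.asarray(a), np.asarray(b)
            po = np.mean(a == b)
            pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)
            return (po - pe) / (1 - pe)

        print(sensitivity, specificity, accuracy)
        print(cohens_kappa([1, 0, 1, 1, 0], [1, 0, 1, 1, 0]))   # identical ratings -> 1.0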

  1. Anatomical landmark position--can we trust what we see? Results from an online reliability and validity study of osteopaths.

    PubMed

    Pattyn, Elise; Rajendran, Dévan

    2014-04-01

    Practitioners traditionally use observation to classify the position of patients' anatomical landmarks. This information may contribute to diagnosis and patient management. To calculate a) Inter-rater reliability of categorising the sagittal plane position of four anatomical landmarks (lateral femoral epicondyle, greater trochanter, mastoid process and acromion) on side-view photographs (with landmarks highlighted and not-highlighted) of anonymised subjects; b) Intra-rater reliability; c) Individual landmark inter-rater reliability; d) Validity against a 'gold standard' photograph. Online inter- and intra-rater reliability study. Photographed subjects: convenience sample of asymptomatic students; raters: randomly selected UK registered osteopaths. 40 photographs of 30 subjects were used; a priori clinically acceptable reliability was ≥0.4. Inter-rater arm: 20 photographs without landmark highlights plus 10 with highlights; Intra-rater arm: 10 duplicate photographs (non-highlighted landmarks). Validity arm: highlighted landmark scores versus 'gold standard' photographs with vertical line. Research ethics approval obtained. Osteopaths (n = 48) categorised landmark position relative to an imagined vertical line; Gwet's Agreement Coefficient 1 (AC1) was calculated and the chance-corrected coefficient benchmarked against Landis and Koch's scale; the validity calculation used Kendall's tau-B. Inter-rater reliability was 'fair' (AC1 = 0.342; 95% confidence interval (CI) = 0.279-0.404) for non-highlighted landmarks and 'moderate' (AC1 = 0.700; 95% CI = 0.596-0.805) for highlighted landmarks. Intra-rater reliability was 'fair' (AC1 = 0.522); range was 'poor' (AC1 = 0.160) to 'substantial' (AC1 = 0.896). No differences were found between individual landmarks. Validity was 'low' (τB = 0.327; p = 0.104). Both inter- and intra-rater reliability were 'fair' but below clinically acceptable levels, and validity was 'low'. Together these results challenge the clinical practice of…
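
    The agreement statistic used above, Gwet's AC1, corrects observed agreement for chance using the average marginal proportion of each category. A minimal two-rater sketch (Python) with hypothetical category labels; the multi-rater form used in the study averages agreement over rater pairs and is not reproduced here.

        import numpy as np

        def gwet_ac1(rater_a, rater_b, categories):
            # Gwet's first-order agreement coefficient (AC1) for two raters.
            a, b = np.asarray(rater_a), np.asarray(rater_b)
            q = len(categories)
            pa = np.mean(a == b)                                   # observed agreement
            pi = np.array([((a == c).mean() + (b == c).mean()) / 2 for c in categories])
            pe = np.sum(pi * (1 - pi)) / (q - 1)                   # chance agreement
            return (pa - pe) / (1 - pe)

        # Hypothetical judgments of landmark position relative to an imagined vertical line
        a = ["ant", "post", "aligned", "post", "ant", "aligned", "post", "ant"]
        b = ["ant", "post", "post", "post", "ant", "aligned", "ant", "ant"]
        print(round(gwet_ac1(a, b, ["ant", "aligned", "post"]), 3))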

  2. Worldwide Protein Data Bank validation information: usage and trends

    PubMed Central

    Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika

    2018-01-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics. PMID:29533231

  3. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  4. Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain

    The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side drilled holes (SDH), flat bottom holes (FBH) and corner echoes from backwall breaking artificial notches inspected with a matrix phased array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBH and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparison between simulated and experimental results is presented and discussed.

  5. Comparison of results obtained with various sensors used to measure fluctuating quantities in jets.

    NASA Technical Reports Server (NTRS)

    Parthasarathy, S. P.; Massier, P. F.; Cuffel, R. F.

    1973-01-01

    An experimental investigation has been conducted to compare the results obtained with six different instruments that sense fluctuating quantities in free jets. These sensors are typical of those that have recently been used by various investigators who are engaged in experimental studies of jet noise. Intensity distributions and two-point correlations with space separation and time delay were obtained. The static pressure, density, and velocity fluctuations are well correlated over the entire cross section of the jet and the cross-correlations persist for several jet diameters along the flow direction. The eddies appear to be flattened in the flow direction by a ratio of 0.4.

  6. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  7. Obtaining Valid Safety Data for Software Safety Measurement and Process Improvement

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Zelkowitz, Marvin V.; Layman, Lucas; Dangle, Kathleen; Diep, Madeline

    2010-01-01

    We report on a preliminary case study to examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Our goal is to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. Our purpose was two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to identify potential risks due to incorrect application of the safety process, deficiencies in the safety process, or the lack of a defined process. One early outcome of this work was to show that there are structural deficiencies in collecting valid safety data that make software safety different from hardware safety. In our conclusions we present some of these deficiencies.

  8. Validity of proposed DSM-5 diagnostic criteria for nicotine use disorder: results from 734 Israeli lifetime smokers

    PubMed Central

    Shmulewitz, D.; Wall, M.M.; Aharonovich, E.; Spivak, B.; Weizman, A.; Frisch, A.; Grant, B. F.; Hasin, D.

    2013-01-01

    Background The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) proposes aligning nicotine use disorder (NUD) criteria with those for other substances, by including the current DSM fourth edition (DSM-IV) nicotine dependence (ND) criteria, three abuse criteria (neglect roles, hazardous use, interpersonal problems) and craving. Although NUD criteria indicate one latent trait, evidence is lacking on: (1) validity of each criterion; (2) validity of the criteria as a set; (3) comparative validity between DSM-5 NUD and DSM-IV ND criterion sets; and (4) NUD prevalence. Method Nicotine criteria (DSM-IV ND, abuse and craving) and external validators (e.g. smoking soon after awakening, number of cigarettes per day) were assessed with a structured interview in 734 lifetime smokers from an Israeli household sample. Regression analysis evaluated the association between validators and each criterion. Receiver operating characteristic analysis assessed the association of the validators with the DSM-5 NUD set (number of criteria endorsed) and tested whether DSM-5 or DSM-IV provided the most discriminating criterion set. Changes in prevalence were examined. Results Each DSM-5 NUD criterion was significantly associated with the validators, with strength of associations similar across the criteria. As a set, DSM-5 criteria were significantly associated with the validators, were significantly more discriminating than DSM-IV ND criteria, and led to increased prevalence of binary NUD (two or more criteria) over ND. Conclusions All findings address previous concerns about the DSM-IV nicotine diagnosis and its criteria and support the proposed changes for DSM-5 NUD, which should result in improved diagnosis of nicotine disorders. PMID:23312475

  9. V-SUIT Model Validation Using PLSS 1.0 Test Results

    NASA Technical Reports Server (NTRS)

    Olthoff, Claas

    2015-01-01

    The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB™-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and also dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data with special focus on absolute values during the steady state phases and dynamic behavior during the transition between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and those that are sufficiently close to the test results. Finally, lessons learned from the modelling and validation process are given in combination…

  10. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... DRG validation. (a) Notice of initial denial determination—(1) Parties to be notified. A QIO must... of identification; (vi) For retrospective review, (excluding DRG validation and post procedure review...

  11. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

    Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure their validity, tests must be performed under conditions that allow comparable results to be obtained regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool for providing these guarantees. Activities related to the verification and validation of analytical methods have become very important, as techniques and increasingly complex analytical equipment are continuously being developed and updated, and professionals have an interest in ensuring the quality of processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in Microbiology. The importance of promoting the use of reference strains and standard controls in Microbiology is stressed, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. The emphasis is on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in SEIMC microbiological procedure number 48: «Validation and verification of microbiological methods», www.seimc.org/protocols/microbiology. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  12. Wide Angle Imaging Lidar (WAIL): Theory of Operation and Results from Cross-Platform Validation at the ARM Southern Great Plains Site

    NASA Astrophysics Data System (ADS)

    Polonsky, I. N.; Davis, A. B.; Love, S. P.

    2004-05-01

    WAIL was designed to determine physical and geometrical characteristics of optically thick clouds using the off-beam component of the lidar return that can be accurately modeled within the 3D photon diffusion approximation. The theory shows that the WAIL signal depends not only on the cloud optical characteristics (phase function, extinction and scattering coefficients) but also on the outer thickness of the cloud layer. This makes it possible to estimate the mean optical and geometrical thicknesses of the cloud. The comparison with Monte Carlo simulation demonstrates the high accuracy of the diffusion approximation for moderately to very dense clouds. During operation WAIL is able to collect a complete data set from a cloud every few minutes, with averaging over horizontal scale of a kilometer or so. In order to validate WAIL's ability to deliver cloud properties, the LANL instrument was deployed as a part of the THickness from Off-beam Returns (THOR) validation IOP. The goal was to probe clouds above the SGP CART site at night in March 2002 from below (WAIL and ARM instruments) and from NASA's P3 aircraft (carrying THOR, the GSFC counterpart of WAIL) flying above the clouds. The permanent cloud instruments we used to compare with the results obtained from WAIL were ARM's laser ceilometer, micro-pulse lidar (MPL), millimeter-wavelength cloud radar (MMCR), and micro-wave radiometer (MWR). The comparison shows that, in spite of an unusually low cloud ceiling, an unfavorable observation condition for WAIL's present configuration, cloud properties obtained from the new instrument are in good agreement with their counterparts obtained by other instruments. So WAIL can duplicate, at least for single-layer clouds, the cloud products of the MWR and MMCR together. But WAIL does this with green laser light, which is far more representative than microwaves of photon transport processes at work in the climate system.

  13. Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets

    PubMed Central

    Raju, V.; Murthy, K. V. R.

    2011-01-01

    The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as the type of dissolution medium, the volume of dissolution medium and the paddle rotation speed, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle) at 50 rpm with 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was evaluated by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines; parameters such as specificity, accuracy, precision and stability were evaluated, and the results obtained were within the acceptable range. The dissolution profiles obtained for three different products were compared using ANOVA-based, model-dependent and model-independent methods, and the results showed that there is a significant difference between the products. The dissolution test developed and validated was adequate because of its high discriminative capacity in differentiating the release characteristics of the products tested, and it could be applied to the development and quality control of carvedilol tablets. PMID:22923865
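
    The abstract does not name the specific model-independent metric used; a common choice for comparing dissolution profiles is the f2 similarity factor, sketched below (Python) with hypothetical percent-dissolved values.

        import numpy as np

        def f2_similarity(reference, test):
            # Model-independent similarity factor f2; f2 >= 50 is conventionally read as similar profiles.
            r, t = np.asarray(reference, float), np.asarray(test, float)
            msd = np.mean((r - t) ** 2)              # mean squared difference in % dissolved
            return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

        # Hypothetical % dissolved at matched time points for two carvedilol products
        ref = [22, 45, 68, 84, 93, 98]
        test = [15, 34, 55, 74, 88, 96]
        print(round(f2_similarity(ref, test), 1))    # a low f2 indicates a discriminative method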

  14. Measurements of Humidity in the Atmosphere and Validation Experiments (Mohave, Mohave II): Results Overview

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.

    2008-01-01

    The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5°N) in October 2006 and 2007 respectively. Both campaigns aimed at evaluating the capability of three Raman lidars for the measurement of water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon-borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence in all three lidar receivers was identified, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise limited. The results from MOHAVE-II have shown that the water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS given a slight increase in its power-aperture, as well as careful calibration.

  15. Using Riemannian geometry to obtain new results on Dikin and Karmarkar methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, P.; Joao, X.; Piaui, T.

    1994-12-31

    We are motivated by a 1990 Karmarkar paper on Riemannian geometry and Interior Point Methods. In this talk we show 3 results. (1) The Karmarkar direction can be derived from the Dikin one. This is obtained by constructing a certain Z(x) representation of the null space of the unitary simplex (e, x) = 1; then the projective direction is the image under Z(x) of the affine-scaling one, when it is restricted to that simplex. (2) Second order information on Dikin and Karmarkar methods. We establish computable Hessians for each of the metrics corresponding to both directions, thus permitting the generation of "second order" methods. (3) Dikin and Karmarkar geodesic descent methods. For those directions, we make the theoretical Luenberger geodesic descent method computable, since we are able to derive very accurate explicit expressions for the corresponding geodesics. Convergence results are given.

  16. Near-Infrared Scintillation of Liquid Argon: Recent Results Obtained with the NIR Facility at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escobar, C. O.; Rubinov, P.; Tilly, E.

    After a short review of previous attempts to observe and measure the near-infrared scintillation in liquid argon, we present new results obtained with NIR, a dedicated cryostat at the Fermilab Proton Assembly Building (PAB). The new results give confidence that the near-infrared light can be used as the much needed light signal in large liquid argon time projection chambers.

  17. Results of Fall 2001 Pilot: Methodology for Validation of Course Prerequisites.

    ERIC Educational Resources Information Center

    Serban, Andreea M.; Fleming, Steve

    The purpose of this study was to test a methodology that will help Santa Barbara City College (SBCC), California, to validate the course prerequisites that fall under the category of highest level of scrutiny--data collection and analysis--as defined by the Chancellor's Office. This study gathered data for the validation of prerequisites for three…

  18. Validation of the content of the prevention protocol for early sepsis caused by Streptococcus agalactiae in newborns

    PubMed Central

    da Silva, Fabiana Alves; Vidal, Cláudia Fernanda de Lacerda; de Araújo, Ednaldo Cavalcante

    2015-01-01

    Abstract Objective: to validate the content of the prevention protocol for early sepsis caused by Streptococcus agalactiae in newborns. Method: a cross-sectional, descriptive and methodological study with a quantitative approach. The sample was composed of 15 judges: 8 obstetricians and 7 pediatricians. Validation was performed through assessment of the protocol content by the judges, who received a checklist data-collection instrument containing 7 items representing the requirements to be met by the protocol. Content validation was achieved by applying the Content Validity Index. Result: in the judging process, all the items representing requirements addressed by the protocol obtained concordance within the established level (Content Validity Index > 0.75). Of the 7 items, 6 obtained full concordance (Content Validity Index 1.0), and the feasibility item obtained a Content Validity Index of 0.93. The global assessment of the instrument obtained a Content Validity Index of 0.99. Conclusion: the content validation performed was an efficient tool for adjusting the protocol according to the judgment of experienced professionals, which demonstrates the importance of conducting a prior validation of such instruments. It is expected that this study will serve as an incentive for the adoption of universal screening by other institutions through validated protocols. PMID:26444165
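
    The Content Validity Index calculation described above reduces to simple proportions. A minimal sketch (Python); the item names, the hypothetical rating sets and the assumption of a 4-point relevance scale are illustrative, not taken from the study.

        import numpy as np

        def item_cvi(ratings, relevant=(3, 4)):
            # Item-level CVI: proportion of judges rating the item as relevant
            # (assumed here: 3 or 4 on a 4-point relevance scale).
            return np.isin(np.asarray(ratings), relevant).mean()

        # Hypothetical ratings from 15 judges for 7 checklist items
        ratings = {
            "item_1": [4] * 15,
            "item_2": [4] * 15,
            "item_3": [4] * 15,
            "item_4": [4] * 15,
            "item_5": [4] * 15,
            "item_6": [4] * 15,
            "feasibility": [4] * 14 + [2],            # one dissenting judge -> I-CVI ~ 0.93
        }
        icvi = {item: round(item_cvi(r), 2) for item, r in ratings.items()}
        scvi_ave = np.mean(list(icvi.values()))       # scale-level CVI, averaging approach
        print(icvi, round(scvi_ave, 2))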

  19. Object-oriented simulation model of a parabolic trough solar collector: Static and dynamic validation

    NASA Astrophysics Data System (ADS)

    Ubieta, Eduardo; Hoyo, Itzal del; Valenzuela, Loreto; Lopez-Martín, Rafael; Peña, Víctor de la; López, Susana

    2017-06-01

    A simulation model of a parabolic-trough solar collector developed in the Modelica® language is calibrated and validated. The calibration is performed to bring the behavior of the solar collector model closer to that of a real collector, given the uncertainty in some of the system parameters; i.e., measured data are used during the calibration process. Afterwards, the calibrated model is validated. During the validation, the results obtained from the model are compared to those obtained during real operation of a collector at the Plataforma Solar de Almeria (PSA).

  20. An integrated bioanalytical method development and validation approach: case studies.

    PubMed

    Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M

    2012-10-01

    We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near-perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Validity and reliability of the Diagnostic Adaptive Behaviour Scale.

    PubMed

    Tassé, M J; Schalock, R L; Balboni, G; Spreat, S; Navas, P

    2016-01-01

    The Diagnostic Adaptive Behaviour Scale (DABS) is a new standardised adaptive behaviour measure that provides information for evaluating limitations in adaptive behaviour for the purpose of determining a diagnosis of intellectual disability. This article presents validity evidence and reliability data for the DABS. Validity evidence was based on comparing DABS scores with scores obtained on the Vineland Adaptive Behaviour Scale, second edition. The stability of the test scores was measured using a test and retest, and inter-rater reliability was assessed by computing the inter-respondent concordance. The DABS convergent validity coefficients ranged from 0.70 to 0.84, while the test-retest reliability coefficients ranged from 0.78 to 0.95, and the inter-rater concordance as measured by intraclass correlation coefficients ranged from 0.61 to 0.87. All obtained validity and reliability indicators were strong and comparable with the validity and reliability coefficients of the most commonly used adaptive behaviour instruments. These results and the advantages of the DABS for clinician and researcher use are discussed. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
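
    The abstract does not state which intraclass correlation form was used; ICC(2,1) (two-way random effects, absolute agreement, single rater) is a common choice for inter-respondent concordance and is sketched below (Python) with hypothetical scores.

        import numpy as np

        def icc_2_1(y):
            # ICC(2,1), Shrout & Fleiss: from an n_subjects x k_raters score matrix.
            y = np.asarray(y, float)
            n, k = y.shape
            grand, row_m, col_m = y.mean(), y.mean(axis=1), y.mean(axis=0)
            msr = k * np.sum((row_m - grand) ** 2) / (n - 1)            # subjects
            msc = n * np.sum((col_m - grand) ** 2) / (k - 1)            # raters
            resid = y - row_m[:, None] - col_m[None, :] + grand
            mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))              # error
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical standard scores from two respondents for five individuals
        scores = np.array([[85, 88], [72, 70], [95, 91], [60, 66], [78, 80]])
        print(round(icc_2_1(scores), 2))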

  2. Results of Investigative Tests of Gas Turbine Engine Compressor Blades Obtained by Electrochemical Machining

    NASA Astrophysics Data System (ADS)

    Kozhina, T. D.; Kurochkin, A. V.

    2016-04-01

    The paper presents the results of investigative tests of GTE compressor Ti-alloy blades produced by electrochemical machining with oscillating tool-electrodes, carried out in order to define the optimal parameters of the ECM process that attain the blade quality specified in the design documentation while providing maximal performance. The new technological methods suggested on the basis of these tests, in particular the application of vibrating tool-electrodes and the use of locating elements made of high-strength materials, significantly extend the capabilities of this method.

  3. Non-Nuclear Validation Test Results of a Closed Brayton Cycle Test-Loop

    NASA Astrophysics Data System (ADS)

    Wright, Steven A.

    2007-01-01

    Both NASA and DOE have programs that are investigating advanced power conversion cycles for planetary surface power on the moon or Mars, or for next generation nuclear power plants on earth. Although open Brayton cycles are in use for many applications (combined cycle power plants, aircraft engines), only a few closed Brayton cycles have been tested. Experience with closed Brayton cycles coupled to nuclear reactors is even more limited and current projections of Brayton cycle performance are based on analytic models. This report describes and compares experimental results with model predictions from a series of non-nuclear tests using a small scale closed loop Brayton cycle available at Sandia National Laboratories. A substantial amount of testing has been performed, and the information is being used to help validate models. In this report we summarize the results from three kinds of tests. These tests include: 1) test results that are useful for validating the characteristic flow curves of the turbomachinery for various gases ranging from ideal gases (Ar or Ar/He) to non-ideal gases such as CO2, 2) test results that represent shut down transients and decay heat removal capability of Brayton loops after reactor shut down, and 3) tests that map a range of operating power versus shaft speed curve and turbine inlet temperature that are useful for predicting stable operating conditions during both normal and off-normal operating behavior. These tests reveal significant interactions between the reactor and balance of plant. Specifically these results predict limited speed up behavior of the turbomachinery caused by loss of load, the conditions for stable operation, and for direct cooled reactors, the tests reveal that the coast down behavior during loss of power events can extend for hours provided the ultimate heat sink remains available.

  4. An innovative recycling process to obtain pure polyethylene and polypropylene from household waste.

    PubMed

    Serranti, Silvia; Luciani, Valentina; Bonifazi, Giuseppe; Hu, Bin; Rem, Peter C

    2015-01-01

    An innovative recycling process, based on magnetic density separation (MDS) and hyperspectral imaging (HSI), to obtain high-quality polypropylene and polyethylene as secondary raw materials is presented. In more detail, MDS was applied to two different polyolefin mixtures coming from household waste. The quality of the two separated PP and PE streams, in terms of purity, was evaluated by a classification procedure based on HSI working in the near-infrared range (1000-1700 nm). The classification model was built using known PE and PP samples as the training set. The results obtained by HSI were compared with those obtained by classical density analysis carried out in the laboratory on the same polymers. The results obtained by MDS and the quality assessment of the plastic products by HSI showed that the combined action of these two technologies is a valid solution that can be implemented at the industrial level. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Legionella in water samples: how can you interpret the results obtained by quantitative PCR?

    PubMed

    Ditommaso, Savina; Ricciardi, Elisa; Giacomuzzi, Monica; Arauco Rivera, Susan R; Zotti, Carla M

    2015-02-01

    Evaluation of the potential risk associated with Legionella has traditionally been based on culture methods. Quantitative polymerase chain reaction (qPCR) is an alternative tool that offers rapid, sensitive and specific detection of Legionella in environmental water samples. In this study we compare the results obtained by conventional qPCR (iQ-Check™ Quanti Legionella spp.; Bio-Rad) and by the culture method on artificial samples prepared in Page's saline by addition of Legionella pneumophila serogroup 1 (ATCC 33152), and we analyse the selective quantification of viable Legionella cells by the qPCR-PMA method. The amount of Legionella DNA (GU) determined by qPCR was 28-fold higher than the load detected by culture (CFU). Applying the qPCR combined with PMA treatment we obtained a reduction of 98.5% of the qPCR signal from dead cells. We observed a dissimilarity in the ability of PMA to suppress the PCR signal in samples with different amounts of bacteria: the effective elimination of detection signals by PMA depended on the concentration of GU, and increasing amounts of cells resulted in higher values of reduction. Using the results from this study we created an algorithm to facilitate the interpretation of viable cell level estimation with qPCR-PMA. Copyright © 2014 Elsevier Ltd. All rights reserved.
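
    The two headline figures above are simple ratios. A minimal sketch (Python) with illustrative numbers chosen to mirror the reported magnitudes, not actual sample data.

        # Hypothetical quantities for one artificial sample
        gu_qpcr = 2.8e5        # genomic units/L from conventional qPCR
        cfu_culture = 1.0e4    # CFU/L from culture
        gu_pma_dead = 4.2e3    # qPCR-PMA signal remaining from heat-killed cells

        overestimation = gu_qpcr / cfu_culture               # ~28-fold in the study
        dead_signal_suppression = 1 - gu_pma_dead / gu_qpcr  # ~98.5% reported

        print(f"GU/CFU ratio: {overestimation:.0f}")
        print(f"PMA suppression of dead-cell signal: {dead_signal_suppression:.1%}")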

  6. An assessment of consistence of exhaust gas emission test results obtained under controlled NEDC conditions

    NASA Astrophysics Data System (ADS)

    Balawender, K.; Jaworski, A.; Kuszewski, H.; Lejda, K.; Ustrzycki, A.

    2016-09-01

    Measurement of the pollutants emitted in automobile combustion engine exhaust gases is of primary importance in view of their harmful impact on the natural environment. This paper presents the results of tests aimed at determining exhaust gas pollutant emissions from a passenger car engine, obtained under repeatable conditions on a chassis dynamometer. The test set-up was installed in a controlled climate chamber allowing temperature conditions to be maintained within the range from -20°C to +30°C. The analysis covered emissions of components such as CO, CO2, NOx, CH4, THC, and NMHC. The purpose of the study was to assess the repeatability of results obtained in a number of tests performed as per the NEDC test plan. The study is an introductory stage of a wider research project concerning the effect of climate conditions and fuel type on the emission of pollutants contained in exhaust gases generated by automotive vehicles.

  7. How well do adolescents recall use of mobile telephones? Results of a validation study

    PubMed Central

    2009-01-01

    Background In the last decade mobile telephone use has become more widespread among children. Concerns expressed about possible health risks have led to epidemiological studies investigating adverse health outcomes associated with mobile telephone use. Most epidemiological studies have relied on self reported questionnaire responses to determine individual exposure. We sought to validate the accuracy of self reported adolescent mobile telephone use. Methods Participants were recruited from year 7 secondary school students in Melbourne, Australia. Adolescent recall of mobile telephone use was assessed using a self administered questionnaire which asked about number and average duration of calls per week. Validation of self reports was undertaken using Software Modified Phones (SMPs) which logged exposure details such as number and duration of calls. Results A total of 59 adolescents participated (39% boys, 61% girls). Overall a modest but significant rank correlation was found between self and validated number of voice calls (ρ = 0.3, P = 0.04) with a sensitivity of 57% and specificity of 66%. Agreement between SMP measured and self reported duration of calls was poorer (ρ = 0.1, P = 0.37). Participants whose parents belonged to the 4th socioeconomic stratum recalled mobile phone use better than others (ρ = 0.6, P = 0.01). Conclusion Adolescent recall of mobile telephone use was only modestly accurate. Caution is warranted in interpreting results of epidemiological studies investigating health effects of mobile phone use in this age group. PMID:19523193

  8. Glucose Meters: A Review of Technical Challenges to Obtaining Accurate Results

    PubMed Central

    Tonyushkina, Ksenia; Nichols, James H.

    2009-01-01

    …, anemia, hypotension, and other disease states. This article reviews the challenges involved in obtaining accurate glucose meter results. PMID:20144348

  9. Design and validation of an automated hydrostatic weighing system.

    PubMed

    McClenaghan, B A; Rocchio, L

    1986-08-01

    The purpose of this study was to design and evaluate the validity of an automated technique to assess body density using a computerized hydrostatic weighing system. An existing hydrostatic tank was modified and interfaced with a microcomputer equipped with an analog-to-digital converter. Software was designed to input variables, control the collection of data, calculate selected measurements, and provide a summary of the results of each session. Validity of the data obtained utilizing the automated hydrostatic weighing system was estimated by: evaluating the reliability of the transducer/computer interface to measure objects of known underwater weight; comparing the data against a criterion measure; and determining inter-session subject reliability. Values obtained from the automated system were found to be highly correlated with known underwater weights (r = 0.99, SEE = 0.0060 kg). Data concurrently obtained utilizing the automated system and a manual chart recorder were also found to be highly correlated (r = 0.99, SEE = 0.0606 kg). Inter-session subject reliability was determined utilizing data collected on subjects (N = 16) tested on two occasions approximately 24 h apart. Correlations revealed high relationships between measures of underwater weight (r = 0.99, SEE = 0.1399 kg) and body density (r = 0.98, SEE = 0.00244 g X cm-1). Results indicate that a computerized hydrostatic weighing system is a valid and reliable method for determining underwater weight.
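
    The body density computation itself follows the standard Archimedean relation; the paper's exact constants are not given in the abstract, so the water-density, residual-volume and gas-volume values below are conventional placeholders, and the Siri conversion is shown only as a typical follow-on step.

        def body_density(wt_air_kg, wt_water_kg, water_density=0.9957, rv_l=1.2, gv_l=0.1):
            # Body density (g/cc) from hydrostatic weighing: weight in air divided by
            # body volume, where volume = displaced water volume minus lung and gut gas.
            body_volume_l = (wt_air_kg - wt_water_kg) / water_density - (rv_l + gv_l)
            return wt_air_kg / body_volume_l

        def siri_percent_fat(density):
            # Siri two-compartment conversion from body density to percent body fat.
            return 495.0 / density - 450.0

        db = body_density(wt_air_kg=75.0, wt_water_kg=3.2, rv_l=1.3)   # hypothetical subject
        print(round(db, 4), round(siri_percent_fat(db), 1))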

  10. Validation of ELDO approaches for retrospective assessment of cumulative eye lens doses of interventional cardiologists-results from DoReMi project.

    PubMed

    Domienik, J; Farah, J; Struelens, L

    2016-12-01

    The first validation results of the two approaches developed in the ELDO project for retrospective assessment of eye lens doses for interventional cardiologists (ICs) are presented in this paper. The first approach (a) is based on both the readings from the routine whole body dosimeter worn above the lead apron and procedure-dependent conversion coefficients, while the second approach (b) is based on detailed information related to the occupational exposure history of the ICs declared in a questionnaire and eye lens dose records obtained from the relevant literature. The latter approach makes use of various published eye lens doses per procedure as well as the appropriate correction factors which account for the use of radiation protective tools designed to protect the eye lens. To validate both methodologies, comprehensive measurements were performed in several Polish clinics among recruited physicians. Two dosimeters measuring whole body and eye lens doses were worn by every physician for at least two months. The estimated cumulative eye lens doses, calculated from both approaches, were then compared against the measured eye lens dose value for every physician separately. Both approaches result in comparable estimates of eye lens doses and tend to overestimate rather than underestimate the eye lens doses. The measured and estimated doses do not differ, on average, by a factor higher than 2.0 in 85% and 62% of the cases used to validate approaches (a) and (b), respectively. In specific cases, however, the estimated doses differ from the measured ones by as much as a factor of 2.7 and 5.1 for methods (a) and (b), respectively. As such, the two approaches can be considered accurate when retrospectively estimating the eye lens doses for ICs and will be of great benefit for ongoing epidemiological studies.
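
    Approach (a) amounts to weighting routine above-apron dosimeter readings by procedure-dependent conversion coefficients and summing over the exposure history. A minimal sketch (Python); all readings and coefficients below are hypothetical placeholders, not ELDO values.

        # (year, above-apron Hp(10) reading in mSv, dominant workload type) - hypothetical
        ANNUAL_RECORDS = [
            (2010, 6.2, "CA/PCI"),
            (2011, 5.4, "CA/PCI"),
            (2012, 7.9, "electrophysiology"),
        ]
        CONVERSION = {                      # hypothetical eye-lens-dose per Hp(10) factors
            "CA/PCI": 0.75,
            "electrophysiology": 0.55,
        }

        cumulative_eye_dose = sum(reading * CONVERSION[work] for _, reading, work in ANNUAL_RECORDS)
        print(f"Estimated cumulative eye lens dose: {cumulative_eye_dose:.1f} mSv")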

  11. 49 CFR 40.160 - What does the MRO do when a valid test result cannot be produced and a negative result is required?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false What does the MRO do when a valid test result cannot be produced and a negative result is required? 40.160 Section 40.160 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Medical Review Officers and the Verification...

  12. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    NASA Astrophysics Data System (ADS)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    We present the parallel particle-in-cell (PIC) code Apar-T and, more importantly, address the fundamental question of the relations between the PIC model, the Vlasov-Maxwell theory, and real plasmas. First, we present four validation tests: spectra from simulations of thermal plasmas, linear growth rates of the relativistic tearing instability and of the filamentation instability, and nonlinear filamentation merging phase. For the filamentation instability we show that the effective growth rates measured on the total energy can differ by more than 50% from the linear cold predictions and from the fastest modes of the simulation. We link these discrepancies to the superparticle number per cell and to the level of field fluctuations. Second, we detail a new method for initial loading of Maxwell-Jüttner particle distributions with relativistic bulk velocity and relativistic temperature, and explain why the traditional method with individual particle boosting fails. The formulation of the relativistic Harris equilibrium is generalized to arbitrary temperature and mass ratios. Both are required for the tearing instability setup. Third, we turn to the key point of this paper and scrutinize the question of what description of (weakly coupled) physical plasmas is obtained by PIC models. These models rely on two building blocks: coarse-graining, i.e., grouping of the order of p ~ 10^10 real particles into a single computer superparticle, and field storage on a grid with its subsequent finite superparticle size. We introduce the notion of coarse-graining dependent quantities, i.e., quantities depending on p. They derive from the PIC plasma parameter ΛPIC, which we show to behave as ΛPIC ∝ 1/p. We explore two important implications. One is that PIC collision- and fluctuation-induced thermalization times are expected to scale with the number of superparticles per grid cell, and thus to be a factor p ~ 10^10 smaller than in real plasmas, a fact that we confirm with…
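
    The ΛPIC ∝ 1/p scaling follows because coarse-graining leaves the charge density, temperature and hence the Debye length unchanged while making discrete particles p times rarer. A small numerical illustration (Python, SI units; Λ taken here as the number of discrete particles per Debye cube, other conventions differ only by a constant factor):

        import numpy as np

        eps0, kB, e = 8.854e-12, 1.381e-23, 1.602e-19

        def debye_length(n_charge, T_e):
            return np.sqrt(eps0 * kB * T_e / (n_charge * e**2))

        def plasma_parameter(n_discrete, T_e, n_charge=None):
            # Discrete particles per Debye cube; the Debye length is set by the
            # physical charge density, which coarse-graining does not change.
            n_charge = n_discrete if n_charge is None else n_charge
            return n_discrete * debye_length(n_charge, T_e) ** 3

        n_real, T_e, p = 1e15, 1e6, 1e10      # density (m^-3), temperature (K), coarse-graining
        lam_real = plasma_parameter(n_real, T_e)
        lam_pic = plasma_parameter(n_real / p, T_e, n_charge=n_real)
        print(f"Lambda_real ~ {lam_real:.1e}, Lambda_PIC ~ {lam_pic:.1e}, ratio ~ {lam_real / lam_pic:.1e}")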

  13. Validation of a clinical critical thinking skills test in nursing

    PubMed Central

    2015-01-01

    Purpose: The purpose of this study was to develop a revised version of the clinical critical thinking skills test (CCTS) and to subsequently validate its performance. Methods: This study is a secondary analysis of the CCTS. Data were obtained from a convenience sample of 284 college students in June 2011. Thirty items were analyzed using item response theory and test reliability was assessed. Test-retest reliability was measured using the results of 20 nursing college and graduate school students in July 2013. The content validity of the revised items was analyzed by calculating the degree of agreement between instrument developer intention in item development and the judgments of six experts. To analyze response process validity, qualitative data related to the response processes of nine nursing college students obtained through cognitive interviews were analyzed. Results: Out of the initial 30 items, 11 items were excluded after analysis of the difficulty and discrimination parameters. When the 19 items of the revised version of the CCTS were analyzed, levels of item difficulty were found to be relatively low and levels of discrimination were found to be appropriate or high. The degree of agreement between item developer intention and expert judgments equaled or exceeded 50%. Conclusion: From the above results, evidence of response process validity was demonstrated, indicating that subjects responded as intended by the test developer. The revised 19-item CCTS was found to have sufficient reliability and validity and will therefore represent a more convenient measurement of critical thinking ability. PMID:25622716

  14. VALFAST: Secure Probabilistic Validation of Hundreds of Kepler Planet Candidates

    NASA Astrophysics Data System (ADS)

    Morton, Tim; Petigura, E.; Johnson, J. A.; Howard, A.; Marcy, G. W.; Baranec, C.; Law, N. M.; Riddle, R. L.; Ciardi, D. R.; Robo-AO Team

    2014-01-01

    The scope, scale, and tremendous success of the Kepler mission has necessitated the rapid development of probabilistic validation as a new conceptual framework for analyzing transiting planet candidate signals. While several planet validation methods have been independently developed and presented in the literature, none has yet come close to addressing the entire Kepler survey. I present the results of applying VALFAST---a planet validation code based on the methodology described in Morton (2012)---to every Kepler Object of Interest. VALFAST is unique in its combination of detail, completeness, and speed. Using the transit light curve shape, realistic population simulations, and (optionally) diverse follow-up observations, it calculates the probability that a transit candidate signal is the result of a true transiting planet or any of a number of astrophysical false positive scenarios, all in just a few minutes on a laptop computer. In addition to efficiently validating the planetary nature of hundreds of new KOIs, this broad application of VALFAST also demonstrates its ability to reliably identify likely false positives. This extensive validation effort is also the first to incorporate data from all of the largest Kepler follow-up observing efforts: the CKS survey of ~1000 KOIs with Keck/HIRES, the Robo-AO survey of >1700 KOIs, and high-resolution images obtained through the Kepler Follow-up Observing Program. In addition to enabling the core science that the Kepler mission was designed for, this methodology will be critical to obtain statistical results from future surveys such as TESS and PLATO.

  15. Airglow during ionospheric modifications by the sura facility radiation. experimental results obtained in 2010

    NASA Astrophysics Data System (ADS)

    Grach, S. M.; Klimenko, V. V.; Shindin, A. V.; Nasyrov, I. A.; Sergeev, E. N.; Yashnov, V. A.; Pogorelko, N. A.

    2012-06-01

    We present the results of studying the structure and dynamics of the HF-heated volume above the Sura facility obtained in 2010 by measurements of ionospheric airglow in the red (λ = 630 nm) and green (λ = 557.7 nm) lines of atomic oxygen. Vertical sounding of the ionosphere (followed by modeling of the pump-wave propagation) and measurements of stimulated electromagnetic emission were used for additional diagnostics of ionospheric parameters and the processes occurring in the heated volume.

  16. Urdu translation of the Hamilton Rating Scale for Depression: Results of a validation study

    PubMed Central

    Hashmi, Ali M.; Naz, Shahana; Asif, Aftab; Khawaja, Imran S.

    2016-01-01

    Objective: To develop a standardized validated version of the Hamilton Rating Scale for Depression (HAM-D) in Urdu. Methods: After translation of the HAM-D into the Urdu language following standard guidelines, the final Urdu version (HAM-D-U) was administered to 160 depressed outpatients. Inter-item correlation was assessed by calculating Cronbach's alpha. Correlation between HAM-D-U scores at baseline and after a 2-week interval was evaluated for test-retest reliability. Moreover, scores of two clinicians on HAM-D-U were compared for inter-rater reliability. For establishing concurrent validity, scores of HAM-D-U and BDI-U were compared by using the Spearman correlation coefficient. The study was conducted at Mayo Hospital, Lahore, from May to December 2014. Results: The Cronbach's alpha for HAM-D-U was 0.71. Composite scores for HAM-D-U at baseline and after a 2-week interval were also highly correlated with each other (Spearman correlation coefficient 0.83, p-value < 0.01), indicating good test-retest reliability. Composite scores for HAM-D-U and BDI-U were positively correlated with each other (Spearman correlation coefficient 0.85, p < 0.01), indicating good concurrent validity. Scores of two clinicians for HAM-D-U were also positively correlated (Spearman correlation coefficient 0.82, p-value < 0.01), indicating good inter-rater reliability. Conclusion: The HAM-D-U is a valid and reliable instrument for the assessment of depression. It shows good inter-rater and test-retest reliability. The HAM-D-U can be used as a tool for either clinical management or research. PMID:28083049
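
    Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-by-subject score matrix. A minimal sketch (Python); the matrix is hypothetical and abbreviated to four items for readability.

        import numpy as np

        def cronbach_alpha(items):
            # Cronbach's alpha from an n_subjects x k_items score matrix.
            x = np.asarray(items, float)
            k = x.shape[1]
            item_var = x.var(axis=0, ddof=1).sum()
            total_var = x.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        # Hypothetical item scores for six patients on four items
        scores = [[2, 2, 3, 2], [1, 1, 1, 0], [3, 3, 4, 3], [0, 1, 0, 0], [2, 3, 2, 2], [1, 2, 1, 1]]
        print(round(cronbach_alpha(scores), 2))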

  17. 42 CFR 456.655 - Validation of showings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Administrator will not find an agency's showing satisfactory if the information obtained through his validation... 42 Public Health 4 2010-10-01 2010-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a...

  18. Validity of Sensory Systems as Distinct Constructs

    PubMed Central

    Su, Chia-Ting

    2014-01-01

    This study investigated the validity of sensory systems as distinct measurable constructs as part of a larger project examining Ayres’s theory of sensory integration. Confirmatory factor analysis (CFA) was conducted to test whether sensory questionnaire items represent distinct sensory system constructs. Data were obtained from clinical records of two age groups, 2- to 5-yr-olds (n = 231) and 6- to 10-yr-olds (n = 223). With each group, we tested several CFA models for goodness of fit with the data. The accepted model was identical for each group and indicated that tactile, vestibular–proprioceptive, visual, and auditory systems form distinct, valid factors that are not age dependent. In contrast, alternative models that grouped items according to sensory processing problems (e.g., over- or underresponsiveness within or across sensory systems) did not yield valid factors. Results indicate that distinct sensory system constructs can be measured validly using questionnaire data. PMID:25184467

  19. Validation of the Intrinsic Spirituality Scale (ISS) with Muslims.

    PubMed

    Hodge, David R; Zidan, Tarek; Husain, Altaf

    2015-12-01

    This study validates an existing spirituality measure--the intrinsic spirituality scale (ISS)--for use with Muslims in the United States. A confirmatory factor analysis was conducted with a diverse sample of self-identified Muslims (N = 281). Validity and reliability were assessed along with criterion and concurrent validity. The measurement model fit the data well, normed χ2 = 2.50, CFI = 0.99, RMSEA = 0.07, and SRMR = 0.02. All 6 items that comprise the ISS demonstrated satisfactory levels of validity (λ > .70) and reliability (R2 > .50). The Cronbach's alpha obtained with the present sample was .93. Appropriate correlations with theoretically linked constructs demonstrated criterion and concurrent validity. The results suggest the ISS is a valid measure of spirituality in clinical settings with the rapidly growing Muslim population. The ISS may, for instance, provide an efficient screening tool to identify Muslims who are particularly likely to benefit from spiritually accommodative treatments. (c) 2015 APA, all rights reserved.
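
    The fit indices quoted above (normed χ2, CFI, RMSEA) follow standard definitions; the sketch below shows how they are computed from model and baseline chi-square statistics. The chi-square values used are hypothetical, not those of the paper.

    ```python
    # Minimal sketch of standard CFA fit-index formulas (hypothetical chi-squares).
    import math

    def fit_indices(chisq_m, df_m, chisq_b, df_b, n):
        normed_chisq = chisq_m / df_m
        rmsea = math.sqrt(max(chisq_m - df_m, 0) / (df_m * (n - 1)))
        cfi = 1 - max(chisq_m - df_m, 0) / max(chisq_b - df_b, chisq_m - df_m, 1e-12)
        return normed_chisq, rmsea, cfi

    # Hypothetical model and baseline (independence-model) chi-squares for N = 281
    print(fit_indices(chisq_m=22.5, df_m=9, chisq_b=1500.0, df_b=15, n=281))
    ```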

  20. Validation of a clinical critical thinking skills test in nursing.

    PubMed

    Shin, Sujin; Jung, Dukyoo; Kim, Sungeun

    2015-01-27

    The purpose of this study was to develop a revised version of the clinical critical thinking skills test (CCTS) and to subsequently validate its performance. This study is a secondary analysis of the CCTS. Data were obtained from a convenience sample of 284 college students in June 2011. Thirty items were analyzed using item response theory and test reliability was assessed. Test-retest reliability was measured using the results of 20 nursing college and graduate school students in July 2013. The content validity of the revised items was analyzed by calculating the degree of agreement between instrument developer intention in item development and the judgments of six experts. To analyze response process validity, qualitative data related to the response processes of nine nursing college students obtained through cognitive interviews were analyzed. Out of the initial 30 items, 11 were excluded after analysis of the difficulty and discrimination parameters. When the 19 items of the revised version of the CCTS were analyzed, levels of item difficulty were found to be relatively low and levels of discrimination were found to be appropriate or high. The degree of agreement between item developer intention and expert judgments equaled or exceeded 50%. From the above results, evidence of response process validity was demonstrated, indicating that subjects responded as intended by the test developer. The revised 19-item CCTS was found to have sufficient reliability and validity and therefore represents a more convenient measurement of critical thinking ability.
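
    The item analysis above is based on item response theory difficulty and discrimination parameters. The sketch below illustrates the two-parameter logistic (2PL) item characteristic curve from which such parameters are estimated; the parameter values are hypothetical, not the CCTS estimates.

    ```python
    # Minimal sketch of a two-parameter logistic (2PL) item characteristic curve,
    # the model behind item difficulty (b) and discrimination (a) parameters.
    import numpy as np

    def icc_2pl(theta, a, b):
        """Probability of a correct response at ability theta for item (a, b)."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)                  # ability grid
    easy_item = icc_2pl(theta, a=1.2, b=-1.0)      # low difficulty, good discrimination
    flat_item = icc_2pl(theta, a=0.4, b=0.5)       # low discrimination -> flat curve
    for t, p1, p2 in zip(theta, easy_item, flat_item):
        print(f"theta={t:+.1f}  P(easy)={p1:.2f}  P(flat)={p2:.2f}")
    ```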

  1. Two Validated Ways of Improving the Ability of Decision-Making in Emergencies; Results from a Literature Review

    PubMed Central

    Khorram-Manesh, Amir; Berlin, Johan; Carlström, Eric

    2016-01-01

    The aim of the current review was to study the existing knowledge about decision-making and to identify and describe validated training tools. A comprehensive literature review was conducted by using the following keywords: decision-making, emergencies, disasters, crisis management, training, exercises, simulation, validated, real-time, command and control, communication, collaboration, and multi-disciplinary, in combination or as isolated words. Two validated training systems developed in Sweden, 3 level collaboration (3LC) and MacSim, were identified and studied in light of the literature review in order to identify how decision-making can be trained. The training models fulfilled six of the eight identified characteristics of training for decision-making. Based on the results, these training models contained methods suitable to train for decision-making. PMID:27878123

  2. Comparison of orbital volume obtained by tomography and rapid prototyping.

    PubMed

    Roça, Guilherme Berto; Foggiatto, José Aguiomar; Ono, Maria Cecilia Closs; Ono, Sergio Eiji; da Silva Freitas, Renato

    2013-11-01

    This study aims to compare orbital volume obtained by helical tomography and rapid prototyping. The study sample was composed of 6 helical tomography scans. Eleven healthy orbits were identified to have their volumes measured. The volumetric analysis with the helical tomography utilized the same protocol developed by the Plastic Surgery Unit of the Federal University of Paraná. From the CT images, 11 prototypes were created, and their respective volumes were analyzed in 2 ways: using software by SolidWorks and by direct analysis, when the prototype was filled with saline solution. For statistical analysis, the results of the volumes of the 11 orbits were considered independent. The average orbital volume obtained by the method of Ono et al was 20.51 cm³, the average obtained by the SolidWorks program was 20.64 cm³, and the average measured using the prototype method was 21.81 cm³. The 3 methods demonstrated a strong correlation between the measurements. The right and left orbits of each patient had similar volumes. The tomographic method for the analysis of orbital volume using the Ono protocol yielded consistent values, and combining this method with rapid prototyping enhanced the reliability and validation of the results.

  3. Validation of the Classroom Behavior Inventory

    ERIC Educational Resources Information Center

    Blunden, Dale; And Others

    1974-01-01

    Factor-analytic methods were used to assess construct validity of the Classroom Behavior Inventory, a scale for rating behaviors associated with hyperactivity. The Classroom Behavior Inventory measures three dimensions of behavior: Hyperactivity, Hostility, and Sociability. Significant concurrent validity was obtained for only one Classroom Behavior…

  4. Social Skills Questionnaire for Argentinean College Students (SSQ-U) Development and Validation.

    PubMed

    Morán, Valeria E; Olaz, Fabián O; Del Prette, Zilda A P

    2015-11-27

    In this paper we present a new instrument called Social Skills Questionnaire for Argentinean College Students (SSQ-U). Based on the adapted version of the Social Skills Inventory - Del Prette (SSI-Del Prette) (Olaz, Medrano, Greco, & Del Prette, 2009), we wrote new items for the scale, and carried out psychometric analysis to assess the validity and reliability of the instrument. In the first study, we collected evidence based on test content through expert judges who evaluated the quality and the relevance of the items. In the second and third studies, we provided validity evidence based on the internal structure of the instrument using exploratory (n = 1067) and confirmatory (n = 661) factor analysis. Results suggested a five-factor structure consistent with the dimensions of social skills, as proposed by Kelly (2002). The fit indexes corresponding to the obtained model were adequate, and composite reliability coefficients of each factor were excellent (above .75). Finally, in the fourth study, we provided evidence of convergent and discriminant validity. The obtained results allow us to conclude that the SSQ-U is the first valid and reliable instrument for measuring social skills in Argentinean college students.

  5. Validity of dietary recall over 20 years among California Seventh-day Adventists.

    PubMed

    Fraser, G E; Lindsted, K D; Knutsen, S F; Beeson, W L; Bennett, H; Shavlik, D J

    1998-10-15

    Past dietary habits are etiologically important to incident disease. Yet the validity of such measurements from the previous 10-20 years is poorly understood. In this study, the authors correlated food frequency results that were obtained in 1994-1995 but pertained to recalled diet in 1974 with the weighted mean of five random 24-hour dietary recalls obtained by telephone in 1974. The subjects studied were 72 Seventh-day Adventists who lived within 30 miles of Loma Linda, California; had participated in a 1974 validation study; were still alive; and were willing to participate again in 1994. A method was developed to allow correction for random error in the reference data when these data had differentially weighted components. The results showed partially corrected correlation coefficients of greater than 0.30 for coffee, whole milk, eggs, chips, beef, fish, chicken, fruit, and legumes. Higher correlations on average were obtained when the food frequencies were scored simply 1-9, reflecting the nine frequency categories. The 95% confidence intervals for 15 of the 28 correlations excluded zero. Incorporation of portion size information was unhelpful. The authors concluded that in this population, data recalled from 20 years ago should be treated with caution but that, for a number of important foods, the degree of validity achieved approached that obtained when assessing current dietary habits.
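
    The "partially corrected" correlations above refer to correction for random error in the reference measurements. A minimal sketch of one standard approach (Spearman-Brown reliability of the five-recall mean followed by one-sided deattenuation) is shown below with hypothetical numbers; it is not the authors' exact weighting scheme.

    ```python
    # Minimal sketch (hypothetical numbers) of correcting an observed food-frequency
    # vs. 24-hour-recall correlation for random error in the reference method only.
    def spearman_brown(rel_single, k):
        """Reliability of the mean of k replicate reference measurements."""
        return k * rel_single / (1 + (k - 1) * rel_single)

    def deattenuate(r_observed, rel_reference):
        """Correct a correlation for error in the reference measure only."""
        return r_observed / rel_reference ** 0.5

    rel_single_recall = 0.35                                   # hypothetical single-recall reliability
    rel_mean_recall = spearman_brown(rel_single_recall, k=5)   # mean of five 24-h recalls
    print("reliability of 5-recall mean:", round(rel_mean_recall, 2))
    print("corrected r:", round(deattenuate(0.25, rel_mean_recall), 2))
    ```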

  6. Validation of a robust proteomic analysis carried out on formalin-fixed paraffin-embedded tissues of the pancreas obtained from mouse and human.

    PubMed

    Kojima, Kyoko; Bowersock, Gregory J; Kojima, Chinatsu; Klug, Christopher A; Grizzle, William E; Mobley, James A

    2012-11-01

    A number of reports have recently emerged with focus on extraction of proteins from formalin-fixed paraffin-embedded (FFPE) tissues for MS analysis; however, reproducibility and robustness as compared to flash frozen controls is generally overlooked. The goal of this study was to identify and validate a practical and highly robust approach for the proteomics analysis of FFPE tissues. FFPE and matched frozen pancreatic tissues obtained from mice (n = 8) were analyzed using 1D-nanoLC-MS(MS)(2) following work up with commercially available kits. The chosen approach for FFPE tissues was found to be highly comparable to that of frozen. In addition, the total number of unique peptides identified between the two groups was highly similar, with 958 identified for FFPE and 1070 identified for frozen, with protein identifications that corresponded by approximately 80%. This approach was then applied to archived human FFPE pancreatic cancer specimens (n = 11) as compared to uninvolved tissues (n = 8), where 47 potential pancreatic ductal adenocarcinoma markers were identified as significantly increased, of which 28 were previously reported. Further, these proteins share strongly overlapping pathway associations to pancreatic cancer that include estrogen receptor α. Together, these data support the validation of an approach for the proteomic analysis of FFPE tissues that is straightforward and highly robust, which can also be effectively applied toward translational studies of disease. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 2; Validation Results

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, Goddard Space Flight Center has conducted a Thermal Loop experiment to advance the maturity of the Thermal Loop technology from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. An MLHP breadboard was built and tested in the laboratory and thermal vacuum environments for the TRL 4 and TRL 5 validations, respectively, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady state and transient behaviors of the MLHP during various validation tests. The MLHP demonstrated excellent performance during experimental tests and the analytical model predictions agreed very well with experimental data. All success criteria at various TRLs were met. Hence, the Thermal Loop technology has reached a TRL of 6. This paper presents the validation results, both experimental and analytical, of this technology development effort.

  8. Validation of recent geopotential models in Tierra Del Fuego

    NASA Astrophysics Data System (ADS)

    Gomez, Maria Eugenia; Perdomo, Raul; Del Cogliano, Daniel

    2017-10-01

    This work presents a validation study of global geopotential models (GGM) in the region of Fagnano Lake, located in the southern Andes. This is an excellent area for this type of validation because it is surrounded by the Andes Mountains, and there is no terrestrial gravity or GNSS/levelling data. However, there are mean lake level (MLL) observations, and its surface is assumed to be almost equipotential. Furthermore, in this article, we propose improved geoid solutions through the Residual Terrain Modelling (RTM) approach. Using a global geopotential model, the results achieved allow us to conclude that it is possible to use this technique to extend an existing geoid model to those regions that lack any information (neither gravimetric nor GNSS/levelling observations). As GGMs have evolved, our results have improved progressively. While the validation of EGM2008 with MLL data shows a standard deviation of 35 cm, GOCO05C shows a deviation of 13 cm, similar to the results obtained on land.

  9. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
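
    A minimal sketch of the idea is given below: a DeLong-style empirical variance for the AUC, which is closely related to the influence-curve approach described above, applied here to pooled out-of-fold predictions. It is an illustration under simplifying assumptions, not the authors' exact cross-validated estimator.

    ```python
    # Minimal sketch: AUC with a DeLong-style (influence-curve-related) variance,
    # applied to hypothetical pooled out-of-fold scores.
    import numpy as np

    def auc_and_variance(scores, labels):
        scores, labels = np.asarray(scores, float), np.asarray(labels, int)
        pos, neg = scores[labels == 1], scores[labels == 0]
        m, n = len(pos), len(neg)
        # pairwise kernel: 1 if pos > neg, 0.5 if tied, 0 otherwise
        psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
        auc = psi.mean()
        v10 = psi.mean(axis=1)          # structural components for the positives
        v01 = psi.mean(axis=0)          # structural components for the negatives
        var = v10.var(ddof=1) / m + v01.var(ddof=1) / n
        return auc, var

    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, 500)
    scores = labels * 0.8 + rng.normal(0, 1, 500)      # hypothetical out-of-fold scores
    auc, var = auc_and_variance(scores, labels)
    se = var ** 0.5
    print(f"AUC = {auc:.3f}, 95% CI = ({auc - 1.96*se:.3f}, {auc + 1.96*se:.3f})")
    ```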

  10. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  11. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

    The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, basement hollow, and the construction of ground-touching assemblies were considered, including intermittent and reduced heating modes. The calculations with the simplified method were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008, Thermal performance of buildings. Heat transfer via the ground. Calculation methods. Comparative estimates of the transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.

  12. Validated liquid chromatographic method and analysis of content of tilianin on several extracts obtained from Agastache mexicana and its correlation with vasorelaxant effect.

    PubMed

    Hernández-Abreu, Oswaldo; Durán-Gómez, Liliana; Best-Brown, Roberto; Villalobos-Molina, Rafael; Rivera-Leyva, Julio; Estrada-Soto, Samuel

    2011-11-18

    The aim was to optimize the recovery of tilianin, an antihypertensive flavonoid isolated from Agastache mexicana (Lamiaceae), a medicinal plant used in Mexico for the treatment of hypertension, and to develop a validated HPLC method to quantify tilianin in different extracts obtained by several extraction methods. The aerial parts of Agastache mexicana were dried at different temperatures (22, 40, 50, 90, 100 and 180°C) and the dry material was extracted with methanol by maceration to compare the content of the active constituent tilianin in the samples. Furthermore, EtOH:H(2)O (7:3), infusion and decoction extracts were prepared from air-dried samples at room temperature to compare the content and composition of the different extraction methods. Moreover, an ex vivo vasorelaxant test on endothelium-intact aortic rat rings was conducted in order to correlate the presence of tilianin with the activity of each extract. Higher concentrations and amounts of tilianin were determined from the chromatograms of the methanolic extracts obtained from plant material dried at 90, 50, 40 and 22°C, followed by 100°C; however, lower concentrations were observed in material dried at 180°C and in the EtOH:H(2)O (7:3) extract. It is worth noting that the methanolic extracts with the highest amounts of tilianin were the most potent vasorelaxant extracts, even though they were less potent than carbachol, the positive control used. Finally, the decoction, infusion and EtOH:H(2)O (7:3) extracts did not show any vasorelaxant effect. The results suggest that extracts with a higher concentration of tilianin possess the best vasorelaxant activity, and the work provides an HPLC method for future quality control of this medicinal plant. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Hopes and Cautions for Instrument-Based Evaluation of Consent Capacity: Results of a Construct Validity Study of Three Instruments

    PubMed Central

    Moye, Jennifer; Azar, Annin R.; Karel, Michele J.; Gurrera, Ronald J.

    2016-01-01

    Does instrument-based evaluation of consent capacity increase the precision and validity of competency assessment, or does ostensible precision provide a false sense of confidence without in fact improving validity? In this paper we critically examine the evidence for construct validity of three instruments for measuring four functional abilities important in consent capacity: understanding, appreciation, reasoning, and expressing a choice. Instrument-based assessment of these abilities is compared through investigation of a multi-trait multi-method matrix in 88 older adults with mild to moderate dementia. Results find variable support for validity. There appears to be strong evidence for good hetero-method validity for the measurement of understanding, mixed evidence for validity in the measurement of reasoning, and strong evidence for poor hetero-method validity for the concepts of appreciation and expressing a choice, although the latter is likely due to extreme range restrictions. The development of empirically based tools for use in capacity evaluation should ultimately enhance the reliability and validity of assessment, yet clearly more research is needed to define and measure the constructs of decisional capacity. We would also emphasize that instrument-based assessment of capacity is only one part of a comprehensive evaluation of competency, which includes consideration of diagnosis, psychiatric and/or cognitive symptomatology, risk involved in the situation, and individual and cultural differences. PMID:27330455

  14. Item validity vs. item discrimination index: a redundancy?

    NASA Astrophysics Data System (ADS)

    Panjaitan, R. L.; Irawati, R.; Sujana, A.; Hanifah, N.; Djuanda, D.

    2018-03-01

    In several literatures on evaluation and test analysis, it is common to find calculations of item validity as well as an item discrimination index (D), with a different formula for each. Meanwhile, other resources state that the item discrimination index can be obtained by calculating the correlation between a testee's score on a particular item and the testee's score on the overall test, which is actually the same concept as item validity. Some research reports, especially undergraduate theses, tend to include both item validity and the item discrimination index in the instrument analysis. These concepts may overlap, for both reflect how well the test measures the examinees' ability. In this paper, examples of data processing results for item validity and the item discrimination index are compared. We discuss whether item validity and the item discrimination index can be represented by only one of them, or whether it is better to present both calculations for simple test analysis, especially in undergraduate theses where test analyses are included.
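
    To make the comparison concrete, the sketch below computes both statistics on hypothetical dichotomous item data: item "validity" as the point-biserial item-total correlation, and the classical upper-lower discrimination index D. On most data sets the two track each other closely, which is the overlap discussed above.

    ```python
    # Minimal sketch comparing item-total correlation and the upper-lower
    # discrimination index D on hypothetical 0/1 item responses.
    import numpy as np

    def item_validity(item, total):
        """Point-biserial correlation between one item and the total test score."""
        return np.corrcoef(item, total)[0, 1]

    def discrimination_index(item, total, fraction=0.27):
        """D = proportion correct in the upper group minus the lower group."""
        k = max(1, int(round(fraction * len(total))))
        order = np.argsort(total)
        lower, upper = order[:k], order[-k:]
        return item[upper].mean() - item[lower].mean()

    rng = np.random.default_rng(2)
    ability = rng.normal(size=(200, 1))
    responses = (rng.normal(size=(200, 20)) < ability).astype(int)   # hypothetical data
    total = responses.sum(axis=1)
    item0 = responses[:, 0]
    print("item validity (r_pbis):", round(item_validity(item0, total), 2))
    print("discrimination index D:", round(discrimination_index(item0, total), 2))
    ```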

  15. Soil Moisture Active Passive Satellite Status and Recent Validation Results

    USDA-ARS?s Scientific Manuscript database

    The Soil Moisture Active Passive (SMAP) mission was launched in January, 2015 and began its calibration and validation (cal/val) phase in May, 2015. Cal/Val will begin with a focus on instrument measurements, brightness temperature and backscatter, and evolve to the geophysical products that include...

  16. Simulated Driving Assessment (SDA) for teen drivers: results from a validation study.

    PubMed

    McDonald, Catherine C; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K

    2015-06-01

    Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardised assessments of teen driving skills exist. The purpose of this study is to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. The SDA's 35 min simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16-17 years, provisional license ≤90 days) and 17 experienced adults (age 25-50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor (DEI Score) reviewed videos of SDA performance. The SDA demonstrated construct validity: (1) teens had a higher Error Score than adults (30 vs. 13, p=0.02); (2) for each additional error committed, the RR of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI 1.05 to 1.10, p<0.01). The SDA demonstrated criterion validity: Error Score was correlated with DEI Score (r=-0.66, p<0.001). This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  17. Uncertainties and understanding of experimental and theoretical results regarding reactions forming heavy and superheavy nuclei

    NASA Astrophysics Data System (ADS)

    Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.

    2018-02-01

    Experimental and theoretical results for the fusion probability PCN of reactants in the entrance channel and the survival probability Wsur against fission during deexcitation of the compound nucleus formed in heavy-ion collisions are discussed. The theoretical results for a set of nuclear reactions leading to the formation of compound nuclei (CNs) with charge number Z = 102-122 reveal a strong sensitivity of PCN to the characteristics of the colliding nuclei in the entrance channel, the dynamics of the reaction mechanism, and the excitation energy of the system. We discuss the validity of assumptions and procedures for the analysis of experimental data, and also the limits of validity of theoretical results obtained by the use of phenomenological models. The comparison of results obtained in many investigated reactions reveals serious limits of validity of the data analysis and calculation procedures.

  18. Optimization of the parameters for obtaining zirconia-alumina coatings, made by flame spraying from results of numerical simulation

    NASA Astrophysics Data System (ADS)

    Ferrer, M.; Vargas, F.; Peña, G.

    2017-12-01

    The K-Sommerfeld values (K) and the melting percentage (% F) obtained by numerical simulation using the Jets et Poudres software were used to find the projection parameters of zirconia-alumina coatings deposited by flame spraying, in order to obtain coatings with good morphological and structural properties for use as thermal insulation. The experimental results show the relationship between the Sommerfeld parameter and the porosity of the zirconia-alumina coatings. It is found that the lowest porosity is obtained when the K-Sommerfeld value is close to 45 with an oxidant flame; in contrast, when superoxidant flames are used, K values are close to 52, which improves wear resistance.

  19. The Escompte - Marseille 2001 International Field Experiment: Ground Based and Lidar Results Obtained At St. Chamas By The Epfl Mobile Laboratory

    NASA Astrophysics Data System (ADS)

    Balin, I.; Jimenez, R.; Simeonov, V.; Ristori, P.; Navarette, M.; van den Bergh, H.; Calpini, B.

    The assessment of air pollution problems, in terms of understanding the nonlinear chemical mechanisms, the transport and meteorological processes, and the choice of abatement strategies, can be based on air pollution models. Nowadays, very few of these models have been validated, owing to the lack of 3D measurements. The goal of the ESCOMPTE experiment was to provide such a 3D database in order to constrain air pollution models. The EPFL-LPA mobile laboratory was part of the ESCOMPTE extensive network and was located on the northern side of the Berre Lake at St. Chamas. In this framework, measurements of air pollutants (O3, SO2, NOx, polycyclic aromatic hydrocarbons, black carbon and particulate matter of less than 10 microns mean diameter) and meteorological parameters (wind, temperature, pressure and relative humidity) were continuously performed from June 10 to July 13, 2001. They were combined with ground-based lidar observations for ozone and aerosol estimation from 100 m above ground level up to the free troposphere at ca. 7 km agl. This paper presents an overview of the results obtained and highlights one of the intensive observation periods (IOP) during which clean air conditions were initially observed, followed by highly polluted air masses during the second half of the IOP.

  20. Routing of Fatty Acids from Fresh Grass to Milk Restricts the Validation of Feeding Information Obtained by Measuring (13)C in Milk.

    PubMed

    Auerswald, Karl; Schäufele, Rudi; Bellof, Gerhard

    2015-12-09

    Dairy production systems vary widely in their feeding and livestock-keeping regimens. Both are well-known to affect milk quality and consumer perceptions. Stable isotope analysis has been suggested as an easy-to-apply tool to validate a claimed feeding regimen. Although it is unambiguous that feeding influences the carbon isotope composition (δ(13)C) in milk, it is not clear whether a reported feeding regimen can be verified by measuring δ(13)C in milk without sampling and analyzing the feed. We obtained 671 milk samples from 40 farms distributed over Central Europe to measure δ(13)C and fatty acid composition. Feeding protocols by the farmers in combination with a model based on δ(13)C feed values from the literature were used to predict δ(13)C in feed and subsequently in milk. The model considered dietary contributions of C3 and C4 plants, contribution of concentrates, altitude, seasonal variation in (12/13)CO2, Suess's effect, and diet-milk discrimination. Predicted and measured δ(13)C in milk correlated closely (r(2) = 0.93). Analyzing milk for δ(13)C allowed validation of a reported C4 component with an error of <8% in 95% of all cases. This included the error of the method (measurement and prediction) and the error of the feeding information. However, the error was not random but varied seasonally and correlated with the seasonal variation in long-chain fatty acids. This indicated a bypass of long-chain fatty acids from fresh grass to milk.
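
    The prediction model described above combines C3 and C4 dietary fractions with a diet-milk discrimination. The sketch below shows a bare two-source mixing calculation of that kind; the end-member delta values and the discrimination are placeholder assumptions, not the calibrated values used in the study.

    ```python
    # Minimal sketch of a two-source (C3/C4) mixing model for milk delta-13C.
    # All delta values and the diet-milk discrimination are hypothetical placeholders.
    def predict_milk_d13c(frac_c4, d13c_c3=-28.0, d13c_c4=-12.5, discrimination=2.0):
        """frac_c4: fraction of dietary carbon from C4 plants (e.g., maize)."""
        d13c_diet = frac_c4 * d13c_c4 + (1.0 - frac_c4) * d13c_c3
        return d13c_diet + discrimination

    for frac_c4 in (0.0, 0.25, 0.5):
        print(f"C4 fraction {frac_c4:.2f} -> predicted milk d13C "
              f"{predict_milk_d13c(frac_c4):+.1f} per mil")
    ```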

  1. Alberta infant motor scale: reliability and validity when used on preterm infants in Taiwan.

    PubMed

    Jeng, S F; Yau, K I; Chen, L C; Hsiao, S F

    2000-02-01

    The goal of this study was to examine the reliability and validity of measurements obtained with the Alberta Infant Motor Scale (AIMS) for evaluation of preterm infants in Taiwan. Two independent groups of preterm infants were used to investigate the reliability (n=45) and validity (n=41) for the AIMS. In the reliability study, the AIMS was administered to the infants by a physical therapist, and infant performance was videotaped. The performance was then rescored by the same therapist and by 2 other therapists to examine the intrarater and interrater reliability. In the validity study, the AIMS and the Bayley Motor Scale were administered to the infants at 6 and 12 months of age to examine criterion-related validity. Intraclass correlation coefficients (ICCs) for intrarater and interrater reliability of measurements obtained with the AIMS were high (ICC=.97-.99). The AIMS scores correlated with the Bayley Motor Scale scores at 6 and 12 months (r=.78 and .90), although the AIMS scores at 6 months were only moderately predictive of the motor function at 12 months (r=.56). The results suggest that measurements obtained with the AIMS have acceptable reliability and concurrent validity but limited predictive value for evaluating preterm Taiwanese infants.
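
    The intraclass correlation coefficients reported above can be computed from a two-way ANOVA decomposition of a subjects-by-raters score matrix. The sketch below shows ICC(2,1) (absolute agreement, single rater) as one common variant, on hypothetical data; the abstract does not state which ICC form was used.

    ```python
    # Minimal sketch of ICC(2,1) from a two-way ANOVA decomposition (hypothetical data).
    import numpy as np

    def icc_2_1(x):
        """x: score matrix of shape (n_subjects, k_raters)."""
        x = np.asarray(x, float)
        n, k = x.shape
        grand = x.mean()
        ss_total = ((x - grand) ** 2).sum()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # subjects
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # raters
        ss_err = ss_total - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                     + k * (ms_cols - ms_err) / n)

    rng = np.random.default_rng(3)
    true_score = rng.normal(50, 10, size=45)                     # 45 hypothetical infants
    ratings = true_score[:, None] + rng.normal(0, 2, (45, 3))    # 3 raters with noise
    print("ICC(2,1):", round(icc_2_1(ratings), 3))
    ```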

  2. Validation of the Transient Structural Response of a Threaded Assembly: Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott W.; Hemez, Francois M.; Robertson, Amy N.

    2004-04-01

    This report explores the application of model validation techniques in structural dynamics. The problem of interest is the propagation of an explosive-driven mechanical shock through a complex threaded joint. The study serves the purpose of assessing whether validating a large-size computational model is feasible, which unit experiments are required, and where the main sources of uncertainty reside. The results documented here are preliminary, and the analyses are exploratory in nature. The results obtained to date reveal several deficiencies of the analysis, to be rectified in future work.

  3. New high-definition thickness data obtained at tropical glaciers: preliminary results from Antisana volcano (Ecuador) using GPR prospection

    NASA Astrophysics Data System (ADS)

    Zapata, Camilo; Andrade, Daniel; Córdova, Jorge; Maisincho, Luis; Carvajal, Juan; Calispa, Marlon; Villacís, Marcos

    2014-05-01

    The study of tropical glaciers has been a significant contribution to the understanding of glacier dynamics and climate change. Much of the data and results have been obtained by analyzing plan-view images obtained by air- and space-borne sensors, as well as depth data obtained by diverse methodologies at selected points on the glacier surface. However, the measurement of glacier thicknesses has remained an elusive task in tropical glaciers, often located in rough terrain where the application of geophysical surveys (i.e. seismic surveys) requires logistics sometimes hardly justified by the amount of data obtained. In the case of Ecuador, however, where most glaciers have developed on active volcanoes and represent sources/reservoirs of fresh water, precise knowledge of such information is fundamental for scientific research but also in order to better assess key aspects for society. The relatively recent but fast development of GPR technology has helped to obtain new high-definition thickness data at Antisana volcano that will be used to: 1) better understand the dynamics and fate of tropical glaciers; 2) better estimate the amount of fresh water stored in the glaciers; 3) better assess the hazards associated with the sudden widespread melting of glaciers during volcanic eruptions. The measurements have been obtained at glaciers 12 and 15 of Antisana volcano, with the help of a commercial GPR equipped with a 25 MHz antenna. A total of 30 transects have been obtained, covering a distance of more than 3 km, from the glacier ablation zone, located at ~ 4600 masl, up to the level of 5200 masl. The preliminary results show a positive correlation between altitude and glacier thickness, with maximum and minimum calculated values reaching up to 80 m, and down to 15 m, respectively. The experience gained at Antisana volcano will be used to prepare a more widespread GPR survey in the glaciers of Cotopaxi volcano, whose implications in terms of volcanic hazards

  4. Validation of Erosion 3D in Lower Saxony - Comparison between modelled soil erosion events and results of a long term monitoring project

    NASA Astrophysics Data System (ADS)

    Bug, Jan; Mosimann, Thomas

    2013-04-01

    Since 2000, water erosion has been surveyed on 400 ha of arable land in three different regions of Lower Saxony (Mosimann et al. 2009). The results of this long-term survey are used for the validation of soil erosion models such as USLE and Erosion 3D. The validation of the physically based model Erosion 3D (Schmidt & Werner 2000) is possible because the survey analyses the effects (soil loss, sediment yield, deposition on site) of single thunderstorm events and also maps major factors of soil erosion (soil, crop, tillage). A 12.5 m raster DEM was used to model the soil erosion events. Rainfall data were acquired from climate stations. Soil and land-use parameters were derived from the "Parameterkatalog Sachsen" (Michael et al. 1996). During thirteen years of monitoring, high-intensity storms fell less frequently than expected. High-intensity rainfalls with a return period of five or ten years usually occurred during periods of maximum plant cover. Winter events were ruled out because data on snow melt and rainfall were not measured. The validation is therefore restricted to 80 events. The validation consists of three parts. The first part compares the spatial distribution of the mapped soil erosion with the model results. The second part calculates the difference in the amount of redistributed soil. The third part analyses off-site effects such as sediment yield and pollution of water bodies. The validation shows that the overall result of Erosion 3D is quite good. Spatial hotspots of soil erosion and of off-site effects are predicted correctly in most cases. However, quantitative comparison is more problematic, because the mapping allows only the quantification of rill erosion and not of sheet erosion. So, as a rule, the predicted soil loss is higher than the mapped soil loss. The prediction of rill development is also problematic. While the model is capable of predicting rills in thalwegs, the modelling of erosion in tractor tracks and headlands is more complicated. In order to

  5. Developing and Validating a Metacognitive Writing Questionnaire for EFL Learners

    ERIC Educational Resources Information Center

    Farahian, Majid

    2017-01-01

    In an attempt to develop a metacognitive writing questionnaire, Farahian (2015) conducted a study which was based on the results obtained from a semi-structured interview (Maftoon, Birjandi & Farahian, 2014). After running various exploratory factor analyses (EFA) to validate the questionnaire, two general scales of knowledge and regulation of…

  6. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results

    PubMed Central

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-01-01

    Background In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. Objective In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. Methods The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users’ perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). Results The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in ‘Quality of Work Life’, ‘Perceived Usefulness’, ‘Perceived Ease of Use’, and ‘User Control’, respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. Conclusions The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. PMID:24567081

  7. [Development And Validation Of A Breastfeeding Knowledge And Skills Questionnaire].

    PubMed

    Gómez Fernández-Vegue, M; Menéndez Orenga, M

    2015-12-01

    Pediatricians play a key role in the onset and duration of breastfeeding. Although it is known that they lack formal education on this subject, there are currently no validated tools available to assess pediatrician knowledge regarding breastfeeding. To develop and validate a Breastfeeding Knowledge and Skills Questionnaire for Pediatricians. Once the knowledge areas were defined, a representative sample of pediatricians was chosen to carry out the survey. After pilot testing, non-discriminating questions were removed. Content validity was assessed by 14 breastfeeding experts, who examined the test, yielding 22 scorable items (maximum score: 26 points). To approach criterion validity, it was hypothesized that a group of pediatricians with a special interest in breastfeeding (1) would obtain better results than pediatricians from a hospital without a maternity ward (2), and the latter would obtain a higher score than the medical residents of Pediatrics training in the same hospital (3). The questionnaire was also evaluated before and after a basic course in breastfeeding. Breastfeeding experts have an index of agreement of >.90 for each item. The 3 groups (n=82) were compared, finding significant differences between group (1) and the rest. Moreover, an improvement was observed in the participants who attended the breastfeeding course (n=31), especially among those with less initial knowledge. Regarding reliability, internal consistency (KR-20=.87), interobserver agreement, and temporal stability were examined, with satisfactory results. A practical and self-administered tool is presented to assess pediatrician knowledge regarding breastfeeding, with a documented validity and reliability. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
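
    The internal-consistency figure quoted above is a KR-20 coefficient; the sketch below computes KR-20 on hypothetical dichotomous (correct/incorrect) responses to the 22 scorable items.

    ```python
    # Minimal sketch of the KR-20 internal-consistency coefficient on hypothetical
    # 0/1 item data (82 respondents x 22 scorable items).
    import numpy as np

    def kr20(items):
        """items: 0/1 array of shape (n_respondents, n_items)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        p = items.mean(axis=0)                            # proportion correct per item
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

    rng = np.random.default_rng(4)
    ability = rng.normal(size=(82, 1))                    # hypothetical respondents
    item_scores = (rng.normal(size=(82, 22)) < ability).astype(int)
    print("KR-20:", round(kr20(item_scores), 2))
    ```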

  8. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics. The results show that in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
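
    As a toy illustration of the approach (not the authors' implementation), the sketch below fits a one-tissue-compartment kinetic model with a random-walk Metropolis sampler; the input function, noise level, flat positive priors, and proposal scale are all assumptions.

    ```python
    # Minimal sketch: Bayesian estimation of one-tissue-compartment kinetic
    # parameters (K1, k2) with random-walk Metropolis. Everything is hypothetical.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(0.1, 60.0, 40)                 # minutes
    cp = 10.0 * np.exp(-0.15 * t)                  # hypothetical plasma input function

    def tissue_curve(k1, k2, t, cp):
        """C_T(t) = K1 * (Cp convolved with exp(-k2 t)), on a uniform time grid."""
        dt = t[1] - t[0]
        kernel = np.exp(-k2 * t)
        return k1 * np.convolve(cp, kernel)[: len(t)] * dt

    true = (0.1, 0.05)
    data = tissue_curve(*true, t, cp) + rng.normal(0, 0.05, t.size)

    def log_posterior(theta, sigma=0.05):
        k1, k2 = theta
        if k1 <= 0 or k2 <= 0:                     # flat priors on positive values
            return -np.inf
        resid = data - tissue_curve(k1, k2, t, cp)
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    # Random-walk Metropolis sampling
    theta = np.array([0.2, 0.1])
    samples, logp = [], log_posterior(theta)
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.005, 2)
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        samples.append(theta)
    samples = np.array(samples[5000:])             # discard burn-in
    print("posterior mean K1, k2:", samples.mean(axis=0))
    print("posterior std  K1, k2:", samples.std(axis=0))
    ```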

  9. Airport Landside - Volume III : ALSIM Calibration and Validation.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...

  10. A Possible Tool for Checking Errors in the INAA Results, Based on Neutron Data and Method Validation

    NASA Astrophysics Data System (ADS)

    Cincu, Em.; Grigore, Ioana Manea; Barbos, D.; Cazan, I. L.; Manu, V.

    2008-08-01

    This work presents preliminary results of a new type of possible application of INAA elemental analysis, useful for checking errors that occur during the investigation of unknown samples; it relies on the INAA method validation experiments and on the accuracy of neutron data from the literature. The paper comprises two sections. The first briefly presents the steps of the experimental tests carried out for INAA method validation and for establishing the performance of the 'ACTIVA-N' laboratory, which is at the same time an illustration of the laboratory's evolution toward that performance. Section 2 presents our recent INAA results on CRMs, whose interpretation opens a discussion about the usefulness of a tool for checking possible errors that is different from the usual statistical procedures.

  11. Utilization of Airborne and in Situ Data Obtained in SGP99, SMEX02, CLASIC and SMAPVEX08 Field Campaigns for SMAP Soil Moisture Algorithm Development and Validation

    NASA Technical Reports Server (NTRS)

    Colliander, Andreas; Chan, Steven; Yueh, Simon; Cosh, Michael; Bindlish, Rajat; Jackson, Tom; Njoku, Eni

    2010-01-01

    Field experiment data sets that include coincident remote sensing measurements and in situ sampling will be valuable in the development and validation of the soil moisture algorithms of the NASA's future SMAP (Soil Moisture Active and Passive) mission. This paper presents an overview of the field experiment data collected from SGP99, SMEX02, CLASIC and SMAPVEX08 campaigns. Common in these campaigns were observations of the airborne PALS (Passive and Active L- and S-band) instrument, which was developed to acquire radar and radiometer measurements at low frequencies. The combined set of the PALS measurements and ground truth obtained from all these campaigns was under study. The investigation shows that the data set contains a range of soil moisture values collected under a limited number of conditions. The quality of both PALS and ground truth data meets the needs of the SMAP algorithm development and validation. The data set has already made significant impact on the science behind SMAP mission. The areas where complementing of the data would be most beneficial are also discussed.

  12. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    USGS Publications Warehouse

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    Based on these findings, the GDEM validation team recommends the release of the GDEM2 to the public, acknowledging that, while vastly improved, some artifacts still exist which could affect its utility in certain applications.

  13. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  14. Elaboration and Validation of the Medication Prescription Safety Checklist 1

    PubMed Central

    Pires, Aline de Oliveira Meireles; Ferreira, Maria Beatriz Guimarães; do Nascimento, Kleiton Gonçalves; Felix, Márcia Marques dos Santos; Pires, Patrícia da Silva; Barbosa, Maria Helena

    2017-01-01

    ABSTRACT Objective: to elaborate and validate a checklist to identify compliance with the recommendations for the structure of medication prescriptions, based on the Protocol of the Ministry of Health and the Brazilian Health Surveillance Agency. Method: methodological research, conducted through the validation and reliability analysis process, using a sample of 27 electronic prescriptions. Results: the analyses confirmed the content validity and reliability of the tool. The content validity, obtained by expert assessment, was considered satisfactory as it covered items that represent the compliance with the recommendations regarding the structure of the medication prescriptions. The reliability, assessed through interrater agreement, was excellent (ICC=1.00) and showed perfect agreement (K=1.00). Conclusion: the Medication Prescription Safety Checklist showed to be a valid and reliable tool for the group studied. We hope that this study can contribute to the prevention of adverse events, as well as to the improvement of care quality and safety in medication use. PMID:28793128
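
    The agreement statistics reported above are an ICC and Cohen's kappa. The sketch below computes kappa for two raters on hypothetical binary compliance ratings of 27 prescriptions, reproducing the perfect-agreement case (kappa = 1).

    ```python
    # Minimal sketch of Cohen's kappa for two raters on hypothetical binary
    # compliant/non-compliant ratings of 27 prescriptions.
    import numpy as np

    def cohens_kappa(r1, r2):
        r1, r2 = np.asarray(r1), np.asarray(r2)
        categories = np.union1d(r1, r2)
        po = (r1 == r2).mean()                                    # observed agreement
        pe = sum((r1 == c).mean() * (r2 == c).mean() for c in categories)
        return (po - pe) / (1 - pe)

    rater1 = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0,
                       1, 1, 1, 0, 1, 1, 1])                      # 27 prescriptions
    rater2 = rater1.copy()                                        # perfect agreement
    print("kappa:", cohens_kappa(rater1, rater2))                 # -> 1.0
    ```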

  15. Environmental Validation of Legionella Control in a VHA Facility Water System.

    PubMed

    Jinadatha, Chetan; Stock, Eileen M; Miller, Steve E; McCoy, William F

    2018-03-01

    OBJECTIVES We conducted this study to determine what sample volume, concentration, and limit of detection (LOD) are adequate for environmental validation of Legionella control. We also sought to determine whether time required to obtain culture results can be reduced compared to spread-plate culture method. We also assessed whether polymerase chain reaction (PCR) and in-field total heterotrophic aerobic bacteria (THAB) counts are reliable indicators of Legionella in water samples from buildings. DESIGN Comparative Legionella screening and diagnostics study for environmental validation of a healthcare building water system. SETTING Veterans Health Administration (VHA) facility water system in central Texas. METHODS We analyzed 50 water samples (26 hot, 24 cold) from 40 sinks and 10 showers using spread-plate cultures (International Standards Organization [ISO] 11731) on samples shipped overnight to the analytical lab. In-field, on-site cultures were obtained using the PVT (Phigenics Validation Test) culture dipslide-format sampler. A PCR assay for genus-level Legionella was performed on every sample. RESULTS No practical differences regardless of sample volume filtered were observed. Larger sample volumes yielded more detections of Legionella. No statistically significant differences at the 1 colony-forming unit (CFU)/mL or 10 CFU/mL LOD were observed. Approximately 75% less time was required when cultures were started in the field. The PCR results provided an early warning, which was confirmed by spread-plate cultures. The THAB results did not correlate with Legionella status. CONCLUSIONS For environmental validation at this facility, we confirmed that (1) 100 mL sample volumes were adequate, (2) 10× concentrations were adequate, (3) 10 CFU/mL LOD was adequate, (4) in-field cultures reliably reduced time to get results by 75%, (5) PCR provided a reliable early warning, and (6) THAB was not predictive of Legionella results. Infect Control Hosp Epidemiol 2018;39:259-266.

  16. Geodetic results from ISAGEX data. [for obtaining center of mass coordinates for geodetic camera sites

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Douglas, B. C.; Walls, D. M.

    1974-01-01

    Laser and camera data taken during the International Satellite Geodesy Experiment (ISAGEX) were used in dynamical solutions to obtain center-of-mass coordinates for the Astro-Soviet camera sites at Helwan, Egypt, and Oulan Bator, Mongolia, as well as the East European camera sites at Potsdam, German Democratic Republic, and Ondrejov, Czechoslovakia. The results are accurate to about 20m in each coordinate. The orbit of PEOLE (i=15) was also determined from ISAGEX data. Mean Kepler elements suitable for geodynamic investigations are presented.

  17. Reliability and validity assessment of gastrointestinal dystemperaments questionnaire: a novel scale in Persian traditional medicine

    PubMed Central

    Hoseinzadeh, Hamidreza; Taghipour, Ali; Yousefi, Mahdi

    2018-01-01

    Background Development of a questionnaire based on the resources of Persian traditional medicine seems necessary. One of the problems faced by practitioners of traditional medicine is the differing opinions regarding the diagnosis of general temperament or the temperament of a particular member (organ). One of the reasons is the lack of valid tools, and this has led to difficulties in training students of traditional medicine and in the treatment of patients. The differences in detection methods have given rise to several treatment methods. Objective The present study aimed to develop a questionnaire and standard software for the diagnosis of gastrointestinal dystemperaments. Methods The present research is a tool-development study that included 8 stages: developing the items, determining the statements based on the items, assessing the face validity, assessing the content validity, assessing the reliability, rating the items, developing software named GDS v.1.1 for calculation of the total score of the questionnaire, and evaluating the concurrent validity using statistical tests including Cronbach’s alpha coefficient and Cohen’s kappa coefficient. Results Based on the results, 112 notes including 62 symptoms were extracted from the resources, and 58 items were obtained from in-person interview sessions with a panel of experts. A statement was selected for each item and, after merging a number of statements, a total of 49 statements was finally obtained. By calculating the statement impact score and determining the content validity, 6 and 10 further items, respectively, were removed from the list of statements. The standardized Cronbach’s alpha for this questionnaire was 0.795 and its concurrent validity was equal to 0.8. Conclusion A quantitative tool was developed for the diagnosis and examination of gastrointestinal dystemperaments. The developed questionnaire is adequately reliable and valid for this purpose. In addition, the software can be used for clinical diagnosis. PMID

  18. Validation of SAM II and SAGE satellite data

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Wang, P.-H.; Farrukh, U. O.; Yue, G. K.

    1987-01-01

    Presented are the results of a validation study of data obtained by the Stratospheric Aerosol and Gas Experiment I (SAGE I) and Stratospheric Aerosol Measurement II (SAM II) satellite experiments. The study includes the entire SAGE I data set (February 1979 - November 1981) and the first four and one-half years of SAM II data (October 1978 - February 1983). These data sets have been validated by their use in the analysis of dynamical, physical and chemical processes in the stratosphere. They have been compared with other existing data sets and the SAGE I and SAM II data sets intercompared where possible. The study has shown the data to be of great value in the study of the climatological behavior of stratospheric aerosols and ozone. Several scientific publications and user-oriented data summaries have appeared as a result of the work carried out under this contract.

  19. Assessing students' communication skills: validation of a global rating.

    PubMed

    Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose

    2008-12-01

    Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences, its importance should be emphasised by performance-based assessment. As detailed checklists have been shown, for several reasons, to be not well suited to the assessment of communication skills, this study aimed to validate a global rating scale. A Canadian instrument was translated into German and adapted to assess students' communication skills during an end-of-semester OSCE. Subjects were second- and third-year medical students at the reformed track of the Charité-Universitaetsmedizin Berlin. Different groups of raters were trained to assess students' communication skills using the global rating scale. Validity testing included concurrent validity and construct validity: judgements of different groups of raters were compared to expert ratings as a defined gold standard. Furthermore, the amount of agreement between scores obtained with this global rating scale and a different instrument for assessing communication skills was determined. Results show that communication skills can be validly assessed by trained non-expert raters as well as by standardised patients using this instrument.

  20. Supersonic, nonlinear, attached-flow wing design for high lift with experimental validation

    NASA Technical Reports Server (NTRS)

    Pittman, J. L.; Miller, D. S.; Mason, W. H.

    1984-01-01

    Results of the experimental validation are presented for the three-dimensional cambered wing that was designed to achieve attached supercritical cross flow for lifting conditions typical of supersonic maneuver. The design point was a lift coefficient of 0.4 at Mach 1.62 and 12 deg angle of attack. Results from the nonlinear full potential method are presented to show the validity of the design process, along with results from linear theory codes. Longitudinal force and moment data and static pressure data were obtained in the Langley Unitary Plan Wind Tunnel at Mach numbers of 1.58, 1.62, 1.66, 1.70, and 2.00 over an angle-of-attack range of 0 to 14 deg at a Reynolds number of 2.0 × 10^6 per foot. Oil flow photographs of the upper surface were obtained at M = 1.62 for alpha ≈ 8, 10, 12, and 14 deg.

  1. Process Skill Assessment Instrument: Innovation to measure student’s learning result holistically

    NASA Astrophysics Data System (ADS)

    Azizah, K. N.; Ibrahim, M.; Widodo, W.

    2018-01-01

    Science process skills (SPS) are very important skills for students. However, it is undeniable that SPS are not a main concern in primary school learning. This research aimed to develop a valid, practical, and effective assessment instrument to measure students’ SPS. The assessment instruments comprise a worksheet and a test. This development research used a one-group pre-test post-test design. Data were obtained through validation, observation, and testing to investigate the validity, practicality, and effectiveness of the instruments. Results showed that the assessment instruments were rated as highly valid, their reliability was categorized as reliable, student SPS activities occurred at a high percentage, and there was a significant improvement in students’ SPS scores. It can be concluded that the SPS assessment instruments are valid, practical, and effective for measuring students’ SPS.

  2. Preliminary Results Obtained in Integrated Safety Analysis of NASA Aviation Safety Program Technologies

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This is a listing of recent unclassified RTO technical publications processed by the NASA Center for AeroSpace Information from January 1, 2001 through March 31, 2001 available on the NASA Aeronautics and Space Database. Contents include 1) Cognitive Task Analysis; 2) RTO Educational Notes; 3) The Capability of Virtual Reality to Meet Military Requirements; 4) Aging Engines, Avionics, Subsystems and Helicopters; 5) RTO Meeting Proceedings; 6) RTO Technical Reports; 7) Low Grazing Angle Clutter...; 8) Verification and Validation Data for Computational Unsteady Aerodynamics; 9) Space Observation Technology; 10) The Human Factor in System Reliability...; 11) Flight Control Design...; 12) Commercial Off-the-Shelf Products in Defense Applications.

  3. Using the web to validate document recognition results: experiments with business cards

    NASA Astrophysics Data System (ADS)

    Oertel, Clemens; O'Shea, Shauna; Bodnar, Adam; Blostein, Dorothea

    2004-12-01

    The World Wide Web is a vast information resource which can be useful for validating the results produced by document recognizers. Three computational steps are involved, all of them challenging: (1) use the recognition results in a Web search to retrieve Web pages that contain information similar to that in the document, (2) identify the relevant portions of the retrieved Web pages, and (3) analyze these relevant portions to determine what corrections (if any) should be made to the recognition result. We have conducted exploratory implementations of steps (1) and (2) in the business-card domain: we use fields of the business card to retrieve Web pages and identify the most relevant portions of those Web pages. In some cases, this information appears suitable for correcting OCR errors in the business card fields. In other cases, the approach fails due to stale information: when business cards are several years old and the business-card holder has changed jobs, then websites (such as the home page or company website) no longer contain information matching that on the business card. Our exploratory results indicate that in some domains it may be possible to develop effective means of querying the Web with recognition results, and to use this information to correct the recognition results and/or detect that the information is stale.

  4. Using the web to validate document recognition results: experiments with business cards

    NASA Astrophysics Data System (ADS)

    Oertel, Clemens; O'Shea, Shauna; Bodnar, Adam; Blostein, Dorothea

    2005-01-01

    The World Wide Web is a vast information resource which can be useful for validating the results produced by document recognizers. Three computational steps are involved, all of them challenging: (1) use the recognition results in a Web search to retrieve Web pages that contain information similar to that in the document, (2) identify the relevant portions of the retrieved Web pages, and (3) analyze these relevant portions to determine what corrections (if any) should be made to the recognition result. We have conducted exploratory implementations of steps (1) and (2) in the business-card domain: we use fields of the business card to retrieve Web pages and identify the most relevant portions of those Web pages. In some cases, this information appears suitable for correcting OCR errors in the business card fields. In other cases, the approach fails due to stale information: when business cards are several years old and the business-card holder has changed jobs, then websites (such as the home page or company website) no longer contain information matching that on the business card. Our exploratory results indicate that in some domains it may be possible to develop effective means of querying the Web with recognition results, and to use this information to correct the recognition results and/or detect that the information is stale.
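    Step (3) above, checking an OCR'd field against the relevant portion of a retrieved page, can be approximated with ordinary fuzzy string matching. The sketch below is not the authors' implementation; it only illustrates the idea using Python's standard-library difflib, and the card field and web snippet are hypothetical:

```python
from difflib import SequenceMatcher

def best_field_match(field: str, web_text: str) -> float:
    """Slide a window the length of the field over the page text and return the
    best similarity ratio (1.0 would be an exact substring match)."""
    field, text = field.lower(), web_text.lower()
    n = len(field)
    if n == 0 or len(text) <= n:
        return SequenceMatcher(None, field, text).ratio()
    return max(SequenceMatcher(None, field, text[i:i + n]).ratio()
               for i in range(len(text) - n + 1))

# hypothetical OCR output versus a snippet of retrieved page text
ocr_field = "Jane Srnith, Acme Corporatlon"   # typical OCR confusions: "rn" for "m", "l" for "i"
web_snip = "Contact: Jane Smith, Acme Corporation, Research Division"
print(round(best_field_match(ocr_field, web_snip), 2))   # a high ratio flags a likely match
```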

  5. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method could bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained by independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements. Remarks were drawn on the analytical results. The fuzzy-logic based rules were shown to be applicable to improve the interpretation of results and facilitate the overall evaluation of the multiplex method.
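    To make the aggregation idea concrete, the sketch below maps a few validation statistics onto [0, 1] "favourable" memberships with simple linear ramps and combines them with a weighted average. The statistic names, thresholds, and weights are hypothetical illustrations, not the expert-defined rules used for the DualChip GMO kit:

```python
import numpy as np

def ramp(x: float, lo: float, hi: float) -> float:
    """Linear fuzzy membership: 0 below lo, 1 above hi."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def overall_indicator(stats: dict, weights: dict) -> float:
    """Collapse several validation statistics into one synthetic score in [0, 1]."""
    # hypothetical thresholds: each statistic is mapped to a degree of 'favourable'
    memberships = {name: ramp(value, 0.70, 0.95) for name, value in stats.items()}
    w = np.array([weights[name] for name in memberships])
    m = np.array([memberships[name] for name in memberships])
    return float((w * m).sum() / w.sum())

# invented per-element validation statistics and equal weights
stats = {"accuracy": 0.91, "sensitivity": 0.88, "specificity": 0.97}
weights = {"accuracy": 1.0, "sensitivity": 1.0, "specificity": 1.0}
print(round(overall_indicator(stats, weights), 2))
```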

  6. Advanced information processing system: Fault injection study and results

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura F.; Masotto, Thomas K.; Lala, Jaynarayan H.

    1992-01-01

    The objective of the AIPS program is to achieve a validated fault-tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters that must be varied to create a comprehensive set of fault injection tests are described, along with the subset of test cases selected, the test case measurements, and the test case execution. Both pin-level hardware faults using a hardware fault injector and software-injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications of the faults and the test results are given for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.

  7. An empirical assessment of validation practices for molecular classifiers

    PubMed Central

    Castaldi, Peter J.; Dahabreh, Issa J.

    2011-01-01

    Proposed molecular classifiers may be overfit to idiosyncrasies of noisy genomic and proteomic data. Cross-validation methods are often used to obtain estimates of classification accuracy, but both simulations and case studies suggest that, when inappropriate methods are used, bias may ensue. Bias can be bypassed and generalizability can be tested by external (independent) validation. We evaluated 35 studies that have reported on external validation of a molecular classifier. We extracted information on study design and methodological features, and compared the performance of molecular classifiers in internal cross-validation versus external validation for 28 studies where both had been performed. We demonstrate that the majority of studies pursued cross-validation practices that are likely to overestimate classifier performance. Most studies were markedly underpowered to detect a 20% decrease in sensitivity or specificity between internal cross-validation and external validation [median power was 36% (IQR, 21–61%) and 29% (IQR, 15–65%), respectively]. The median reported classification performance for sensitivity and specificity was 94% and 98%, respectively, in cross-validation and 88% and 81% for independent validation. The relative diagnostic odds ratio was 3.26 (95% CI 2.04–5.21) for cross-validation versus independent validation. Finally, we reviewed all studies (n = 758) which cited those in our study sample, and identified only one instance of additional subsequent independent validation of these classifiers. In conclusion, these results document that many cross-validation practices employed in the literature are potentially biased and genuine progress in this field will require adoption of routine external validation of molecular classifiers, preferably in much larger studies than in current practice. PMID:21300697
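    The optimistic bias described above typically arises when feature selection (or any other tuning) is done on the full data set before cross-validation. The following self-contained scikit-learn sketch on pure-noise synthetic data — not data from any of the reviewed studies — contrasts that "leaky" procedure with a pipeline that keeps selection inside each training fold:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.randn(60, 2000)            # pure-noise "omics" matrix: 60 samples, 2000 features
y = rng.randint(0, 2, 60)          # labels unrelated to X, so true accuracy is ~0.5

# Leaky: pick the 20 "best" features on ALL samples, then cross-validate
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean()

# Proper: keep feature selection inside each training fold via a Pipeline
pipe = Pipeline([("select", SelectKBest(f_classif, k=20)),
                 ("clf", LogisticRegression(max_iter=1000))])
proper = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky cross-validation accuracy:  {leaky:.2f}  (optimistically biased)")
print(f"proper cross-validation accuracy: {proper:.2f}  (near chance, as it should be)")
```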

  8. Main results and experience obtained on Mir space station and experiment program for Russian segment of ISS.

    PubMed

    Utkin, V F; Lukjashchenko, V I; Borisov, V V; Suvorov, V V; Tsymbalyuk, M M

    2003-07-01

    This article presents the main scientific and practical results obtained in the course of scientific and applied research and experiments on the Mir space station. Based on the Mir experience, the processes of research program formation for the Russian Segment of the ISS are briefly described. The major trends of activities planned in the frame of these programs, as well as preliminary results of the implementation of increment research programs in the first ISS missions, are also presented. © 2003 Elsevier Science Ltd. All rights reserved.

  9. Performance validation of the ANSER control laws for the F-18 HARV

    NASA Technical Reports Server (NTRS)

    Messina, Michael D.

    1995-01-01

    The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.

  10. Performance validation of the ANSER Control Laws for the F-18 HARV

    NASA Technical Reports Server (NTRS)

    Messina, Michael D.

    1995-01-01

    The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model'. This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.

  11. Improving a DSM Obtained by Unmanned Aerial Vehicles for Flood Modelling

    NASA Astrophysics Data System (ADS)

    Mourato, Sandra; Fernandez, Paulo; Pereira, Luísa; Moreira, Madalena

    2017-12-01

    According to the EU Floods Directive, flood hazard maps must be used to assess flood risk. These maps can be developed with hydraulic modelling tools using a Digital Surface Runoff Model (DSRM). During the last decade, important advances in spatial data processing have been made which will certainly improve hydraulic model results. Currently, images acquired with a Red/Green/Blue (RGB) camera carried by an Unmanned Aerial Vehicle (UAV) are seen as a good alternative data source for representing the terrain surface with a high level of resolution and precision. The question is whether the digital surface model obtained from these data is adequate for a good representation of the hydraulic flood characteristics. For this purpose, the hydraulic model HEC-RAS was run with 4 different DSRMs for an 8.5 km reach of the Lis River in Portugal. The computational performance of the 4 modelling implementations is evaluated. Water level records from two hydrometric stations were used as boundary conditions of the hydraulic model. The records from a third hydrometric station were used to validate the optimal DSRM. The HEC-RAS results that performed best during the validation step were those obtained with the DSRM that integrated the two altimetry data sources.

  12. Initial Validation and Results of Geoscience Laser Altimeter System Optical Properties Retrievals

    NASA Technical Reports Server (NTRS)

    Hlavka, Dennis L.; Hart, W. D.; Pal, S. P.; McGill, M.; Spinhirne, J. D.

    2004-01-01

    Verification of Geoscience Laser Altimeter System (GLAS) optical retrievals is problematic in that passage over ground sites is both instantaneous and sparse, and space-borne passive sensors such as MODIS are too frequently out of sync with the GLAS position. In October 2003, the GLAS Validation Experiment was executed from NASA Dryden Flight Research Center, California to greatly increase validation possibilities. The high-altitude NASA ER-2 aircraft and onboard instrumentation of the Cloud Physics Lidar (CPL), MODIS Airborne Simulator (MAS), and/or MODIS/ASTER Airborne Simulator (MASTER) under-flew seven orbit tracks of GLAS for inter-comparisons of cirrus, smoke, and urban pollution optical properties. This highly calibrated suite of instruments provides the best data set yet for validating GLAS atmospheric parameters. In this presentation, we will focus on the inter-comparison between GLAS and CPL and draw preliminary conclusions about the accuracies of the GLAS 532 nm retrievals of optical depth, extinction, backscatter cross section, and calculated extinction-to-backscatter ratio. Comparisons to an AERONET/MPL ground-based site at Monterey, California will be attempted. Examples of GLAS operational optical data products will be shown.

  13. MRI-based modeling for radiocarpal joint mechanics: validation criteria and results for four specimen-specific models.

    PubMed

    Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet

    2011-10-01

    The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations

  14. Diagnostic value of blood-derived microRNAs for schizophrenia: results of a meta-analysis and validation.

    PubMed

    Liu, Sha; Zhang, Fuquan; Wang, Xijin; Shugart, Yin Yao; Zhao, Yingying; Li, Xinrong; Liu, Zhifen; Sun, Ning; Yang, Chunxia; Zhang, Kerang; Yue, Weihua; Yu, Xin; Xu, Yong

    2017-11-10

    There is increasing interest in searching for biomarkers for schizophrenia (SZ) diagnosis, which would overcome the drawbacks inherent in subjective diagnostic methods. MicroRNA (miRNA) fingerprints have been explored for disease diagnosis. We performed a meta-analysis to examine the diagnostic value of miRNAs for SZ and further validated the meta-analysis results. Using the following terms: schizophrenia/SZ, microRNA/miRNA, diagnosis, sensitivity and specificity, we searched databases restricted to the English language and reviewed all articles published from January 1990 to October 2016. All extracted data were statistically analyzed and the results were further validated with peripheral blood mononuclear cells (PBMNCs) isolated from patients and healthy controls using RT-qPCR and receiver operating characteristic (ROC) analysis. A total of 6 studies involving 330 patients and 202 healthy controls were included in the meta-analysis. The pooled sensitivity, specificity and diagnostic odds ratio were 0.81 (95% CI: 0.75-0.86), 0.81 (95% CI: 0.72-0.88) and 18 (95% CI: 9-34), respectively; the positive and negative likelihood ratios were 4.3 and 0.24, respectively; the area under the curve in the summary ROC was 0.87 (95% CI: 0.84-0.90). Validation revealed that miR-181b-5p, miR-21-5p, miR-195-5p, miR-137, miR-346 and miR-34a-5p in PBMNCs had high diagnostic sensitivity and specificity in the context of schizophrenia. In conclusion, blood-derived miRNAs might be promising biomarkers for SZ diagnosis.
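    For orientation, the pooled statistics quoted above (sensitivity, specificity, likelihood ratios, diagnostic odds ratio) are all functions of 2×2 counts. The sketch below computes them for a single hypothetical table whose counts were chosen only to land near the pooled values in the abstract; it is not the meta-analytic pooling itself, which weights and combines the individual studies:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-accuracy statistics from a single 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
        "DOR": (tp * tn) / (fp * fn),   # diagnostic odds ratio
    }

# hypothetical counts chosen only to fall near the pooled values quoted above
print({k: round(v, 2) for k, v in diagnostic_metrics(tp=81, fp=19, fn=19, tn=81).items()})
```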

  15. Preliminary results of the Geoid Slope Validation Survey 2014 in Iowa

    NASA Astrophysics Data System (ADS)

    Wang, Y. M.; Becker, C.; Breidenbach, S.; Geoghegan, C.; Martin, D.; Winester, D.; Hanson, T.; Mader, G. L.; Eckl, M. C.

    2014-12-01

    The National Geodetic Survey conducted a second Geoid Slope Validation Survey in the summer of 2014 (GSVS14). The survey took place in Iowa along U.S. Route 30. The survey line is approximately 200 miles (325 km) long, extending from Denison, IA to Cedar Rapids, IA. There are over 200 official survey bench marks. A leveling survey was performed, conforming to 1st-order, class II specifications. A GPS survey was performed using 24- to 48-hour occupations. Absolute gravity, relative gravity, and gravity gradient measurements were also collected during the survey. In addition, deflections of the vertical were acquired at 200 eccentric survey benchmarks using the Compact Digital Astrometric Camera (CODIAC). This paper presents the preliminary results of the survey, including the accuracy analysis of the leveling data, the GPS ellipsoidal heights, and the deflections of the vertical, which serve as an independent data set in addition to the GPS/leveling-implied geoid heights.

  16. Predictive validity of cannabis consumption measures: Results from a national longitudinal study.

    PubMed

    Buu, Anne; Hu, Yi-Han; Pampati, Sanjana; Arterberry, Brooke J; Lin, Hsien-Chang

    2017-10-01

    Validating the utility of cannabis consumption measures for predicting later cannabis-related symptomatology or progression to cannabis use disorder (CUD) is crucial for prevention and intervention work that may use consumption measures for quick screening. This study examined whether cannabis use quantity and frequency predicted CUD symptom counts, progression to onset of CUD, and persistence of CUD. Data from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) at Wave 1 (2001-2002) and Wave 2 (2004-2005) were used to identify three risk samples: (1) current cannabis users at Wave 1 who were at risk for having CUD symptoms at Wave 2; (2) current users without lifetime CUD who were at risk for incident CUD; and (3) current users with past-year CUD who were at risk for persistent CUD. Logistic regression and zero-inflated Poisson models were used to examine the longitudinal effect of cannabis consumption on CUD outcomes. Higher frequency of cannabis use predicted a lower likelihood of being symptom-free, but it did not predict the severity of CUD symptomatology. Higher frequency of cannabis use also predicted a higher likelihood of progression to onset of CUD and persistence of CUD. Cannabis use quantity, however, did not predict any of the developmental stages of CUD symptomatology examined in this study. This study has provided a new piece of evidence to support the predictive validity of cannabis use frequency based on national longitudinal data. The result supports the common practice of including frequency items in cannabis screening tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
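    As a minimal illustration of the kind of model used here, the sketch below fits a logistic regression relating baseline consumption measures to a simulated later CUD-onset outcome. The data are simulated (not NESARC), the variable names are hypothetical, and the zero-inflated Poisson model used for symptom counts is omitted for brevity:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# hypothetical baseline consumption measures (names and distributions invented)
freq = rng.integers(0, 30, n).astype(float)   # days of use in the past month
quantity = rng.normal(1.5, 0.5, n)            # average amount per use day

# simulate a follow-up CUD-onset outcome that depends on frequency only
p_onset = 1.0 / (1.0 + np.exp(-(-3.0 + 0.12 * freq)))
cud_onset = rng.binomial(1, p_onset)

X = sm.add_constant(np.column_stack([freq, quantity]))  # columns: const, freq, quantity
fit = sm.Logit(cud_onset, X).fit(disp=False)
print(np.round(fit.params, 3))    # frequency coefficient recovered near 0.12
print(np.round(fit.pvalues, 3))   # quantity coefficient indistinguishable from zero
```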

  17. The earth radiation budget experiment: Early validation results

    NASA Astrophysics Data System (ADS)

    Smith, G. Louis; Barkstrom, Bruce R.; Harrison, Edwin F.

    The Earth Radiation Budget Experiment (ERBE) consists of radiometers on a dedicated spacecraft in a 57° inclination orbit, which has a precessional period of 2 months, and on two NOAA operational meteorological spacecraft in near polar orbits. The radiometers include scanning narrow field-of-view (FOV) and nadir-looking wide and medium FOV radiometers covering the ranges 0.2 to 5 μm and 5 to 50 μm and a solar monitoring channel. This paper describes the validation procedures and preliminary results. Each of the radiometer channels underwent extensive ground calibration, and the instrument packages include in-flight calibration facilities which, to date, show negligible changes of the instruments in orbit, except for gradual degradation of the suprasil dome of the shortwave wide FOV (about 4% per year). Measurements of the solar constant by the solar monitors, wide FOV, and medium FOV radiometers of two spacecraft agree to a fraction of a percent. Intercomparisons of the wide and medium FOV radiometers with the scanning radiometers show agreement of 1 to 4%. The multiple ERBE satellites are acquiring the first global measurements of regional scale diurnal variations in the Earth's radiation budget. These diurnal variations are verified by comparison with high temporal resolution geostationary satellite data. Other principal investigators of the ERBE Science Team are: R. Cess, SUNY, Stony Brook; J. Coakley, NCAR; C. Duncan, M. King and A. Mecherikunnel, Goddard Space Flight Center, NASA; A. Gruber and A.J. Miller, NOAA; D. Hartmann, U. Washington; F.B. House, Drexel U.; F.O. Huck, Langley Research Center, NASA; G. Hunt, Imperial College, London U.; R. Kandel and A. Berroir, Laboratory of Dynamic Meteorology, Ecole Polytechnique; V. Ramanathan, U. Chicago; E. Raschke, U. of Cologne; W.L. Smith, U. of Wisconsin and T.H. Vonder Haar, Colorado State U.

  18. 25 CFR 162.539 - Must I obtain a WEEL before obtaining a WSR lease?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... AND PERMITS Wind and Solar Resource Leases Wsr Leases § 162.539 Must I obtain a WEEL before obtaining... direct result of energy resource information gathered from a WEEL activity, obtaining a WEEL is not a...

  19. 25 CFR 162.539 - Must I obtain a WEEL before obtaining a WSR lease?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... AND PERMITS Wind and Solar Resource Leases Wsr Leases § 162.539 Must I obtain a WEEL before obtaining... direct result of energy resource information gathered from a WEEL activity, obtaining a WEEL is not a...

  20. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
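    The post-test probabilities discussed here follow directly from Bayes' theorem given a test's sensitivity and specificity and the prevalence in the new population. The short sketch below uses illustrative accuracy values (not those of either clinical example) to show how strongly PPV and NPV shift with prevalence, which is why the authors recommend tailoring them to the target population:

```python
def post_test_probabilities(sens: float, spec: float, prevalence: float):
    """PPV and NPV in a new population, via Bayes' theorem."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# illustrative test accuracy; the same test looks very different across prevalences
for prev in (0.05, 0.20, 0.50):
    ppv, npv = post_test_probabilities(sens=0.85, spec=0.90, prevalence=prev)
    print(f"prevalence {prev:.2f}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
```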

  1. Hydrological Validation of The Lpj Dynamic Global Vegetation Model - First Results and Required Actions

    NASA Astrophysics Data System (ADS)

    Haberlandt, U.; Gerten, D.; Schaphoff, S.; Lucht, W.

    Dynamic global vegetation models are developed with the main purpose of describing the spatio-temporal dynamics of vegetation at the global scale. Increasing concern about climate change impacts has put the focus of recent applications on the simulation of the global carbon cycle. Water is a prime driver of biogeochemical and biophysical processes, thus an appropriate representation of the water cycle is crucial for their proper simulation. However, these models usually lack thorough validation of the water balance they produce. Here we present a hydrological validation of the current version of the LPJ (Lund-Potsdam-Jena) model, a dynamic global vegetation model operating at daily time steps. Long-term simulated runoff and evapotranspiration are compared to literature values, results from three global hydrological models, and discharge observations from various macroscale river basins. It was found that the seasonal and spatial patterns of the LPJ-simulated average values correspond well both with the measurements and the results from the stand-alone hydrological models. However, a general underestimation of runoff occurs, which may be attributable to the low input dynamics of precipitation (equal distribution within a month), to the simulated vegetation pattern (potential vegetation without anthropogenic influence), and to some generalizations of the hydrological components in LPJ. Future research will focus on a better representation of the temporal variability of climate forcing, improved description of hydrological processes, and on the consideration of anthropogenic land use.

  2. Convergent and Discriminant Validation of Student Ratings of College Instructors.

    ERIC Educational Resources Information Center

    Hillery, Joseph M.; Yukl, Gary A.

    This paper reports the results of a validation study of data obtained from a teacher rating survey conducted by the University of Akron Student Council during Fall 1969. The rating questionnaire consisted of 14 items: two items measured the student's overall evaluation of his instructor; five items measured specific performance dimensions such as…

  3. Do placebo based validation standards mimic real batch products behaviour? Case studies.

    PubMed

    Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E

    2011-06-01

    Analytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns with this approach is that it may lack an important source of variability that comes from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real, authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batches (tablets and syrups). The results showed that, depending on the type of response function used as the calibration curve, there were various degrees of difference in the accuracy of the results obtained with the two types of samples. Nonetheless, the use of spiked placebo validation standards was shown to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples to the validation design may help the analyst to select and confirm the most fit-for-purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. [Validation of the VISA-A-G questionnaire for German-speaking patients suffering from Haglund's disease].

    PubMed

    Lohrer, H; Nauck, T

    2010-06-01

    The VISA-A questionnaire is currently the only valid, reliable, and disease-specific patient-administered questionnaire for research in Achilles tendinopathy. To perform multinational and multilingual investigations, this instrument has already been adapted to several languages. According to the "guidelines for the process of cross-cultural adaptation of self-report measures", we previously translated and validated the VISA-A questionnaire for patients with Achilles tendinopathy. The aim of this study was to cross-culturally adapt and validate the VISA-A questionnaire for German-speaking patients suffering from Haglund's disease. The VISA-A-G questionnaire was tested for reliability, validity, and internal consistency in 39 Haglund's disease patients and 79 asymptomatic persons. For concurrent validity, the VISA-A-G was compared with the Curwin and Stanish tendon grading system and with the Percy and Conochie classification system for the effect of pain on athletic performance. VISA-A-G results in Haglund's disease were additionally compared with VISA-A-G results obtained from Achilles tendinopathy patients and with VISA-A results presented in the international literature. The ICC for the VISA-A-G questionnaire in conservatively treated Haglund's disease patients was 0.96. In asymptomatic students and joggers, the ICC was 0.97 and 0.60, respectively. When correlated with the grading system of Curwin and Stanish and with the Percy and Conochie classification, rho was -0.95 and 0.94, respectively. Internal consistency (Cronbach's alpha) for the total VISA-A-G scores of the patients was calculated to be 0.87. Compared with VISA-A-G results obtained from Achilles tendinopathy patients, there was no relevant difference discernible. Compared with VISA-A results presented in the original publication, no statistical difference was found for students, healthy people, conservative, and preoperative patients, respectively. This study confirms that the VISA-A-G is a valid and reliable measure for German-speaking patients suffering from

  5. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
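    Experiment 1 relies on a blocked (consecutive-period) k-fold split rather than a random one. A minimal sketch of that fold construction, assuming a daily series labelled by year over 1979-2008; it illustrates only the splitting, not the VALUE indices, measures, or portal:

```python
import numpy as np

def blocked_kfold(years, n_folds=5):
    """Yield (train_idx, test_idx) pairs where each test block is a set of consecutive years."""
    years = np.asarray(years)
    for test_years in np.array_split(np.unique(years), n_folds):
        test_mask = np.isin(years, test_years)
        yield np.where(~test_mask)[0], np.where(test_mask)[0]

# daily record labelled by year, 1979-2008 (365 placeholder days per year)
years = np.repeat(np.arange(1979, 2009), 365)

for fold, (train_idx, test_idx) in enumerate(blocked_kfold(years), start=1):
    held_out = np.unique(years[test_idx])
    print(f"fold {fold}: hold out {held_out.min()}-{held_out.max()}, "
          f"train on the other {len(np.unique(years[train_idx]))} years")
```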

  6. Satisfaction with information provided to Danish cancer patients: validation and survey results.

    PubMed

    Ross, Lone; Petersen, Morten Aagaard; Johnsen, Anna Thit; Lundstrøm, Louise Hyldborg; Groenvold, Mogens

    2013-11-01

    To validate five items (CPWQ-inf) regarding satisfaction with information provided to cancer patients from health care staff, assess the prevalence of dissatisfaction with this information, and identify factors predicting dissatisfaction. The questionnaire was validated by patient-observer agreement and cognitive interviews. The prevalence of dissatisfaction was assessed in a cross-sectional sample of all cancer patients in contact with hospitals during the past year in three Danish counties. The validation showed that the CPWQ performed well. Between 3 and 23% of the 1490 participating patients were dissatisfied with each of the measured aspects of information. The highest level of dissatisfaction was reported regarding the guidance, support and help provided when the diagnosis was given. Younger patients were consistently more dissatisfied than older patients. The brief CPWQ performs well for survey purposes. The survey depicts the heterogeneous patient population encountered by hospital staff and showed that younger patients probably had higher expectations or a higher need for information and that those with more severe diagnoses/prognoses require extra care in providing information. Four brief questions can efficiently assess information needs. With increasing demands for information, a wide range of innovative initiatives is needed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Development and validation of an educational booklet for healthy eating during pregnancy

    PubMed Central

    de Oliveira, Sheyla Costa; Lopes, Marcos Venícios de Oliveira; Fernandes, Ana Fátima Carvalho

    2014-01-01

    OBJECTIVE: to describe the validation process of an educational booklet for healthy eating in pregnancy using local and regional food. METHODS: a methodological study, developed in three steps: construction of the educational booklet, validation of the educational material by judges, and validation by pregnant women. The validation process was conducted with 22 judges and 20 pregnant women, selected by convenience. We considered a p-value < 0.85 to validate the booklet's compliance and relevance, according to the six items of the instrument. For content validation, items were considered valid when the item-level Content Validity Index (I-CVI) reached a minimum score of at least 0.80. RESULTS: five items were considered relevant by the judges. The mean I-CVI was 0.91. The pregnant women evaluated the booklet positively. The suggestions were accepted and included in the final version of the material. CONCLUSION: the booklet was validated in terms of content and relevance, and should be used by nurses for advice on healthy eating during pregnancy. PMID:25296145
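    The item-level Content Validity Index used above is simply the proportion of judges rating an item as relevant (3 or 4 on a 4-point scale). A short sketch with invented expert ratings, applying the 0.80 retention threshold mentioned in the abstract:

```python
import numpy as np

def item_cvi(ratings: np.ndarray) -> np.ndarray:
    """Item-level Content Validity Index: proportion of experts rating each item
    3 or 4 on a 4-point relevance scale (rows = experts, columns = items)."""
    return (np.asarray(ratings) >= 3).mean(axis=0)

# invented ratings from 6 experts on 4 items (4-point relevance scale)
ratings = np.array([[4, 3, 2, 4],
                    [4, 4, 2, 3],
                    [3, 4, 3, 4],
                    [4, 3, 2, 4],
                    [4, 4, 1, 3],
                    [3, 4, 2, 4]])
icvi = item_cvi(ratings)
print(np.round(icvi, 2))        # per-item CVI; the third item is clearly weak
print(icvi >= 0.80)             # items meeting the 0.80 retention threshold
```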

  8. Concurrent validity of the Learning and Study Strategies Inventory (LASSI): a study of African American precollege students.

    PubMed

    Flowers, Lamont A; Bridges, Brian K; Moore III, James L

    2012-01-01

    Concurrent validation procedures were employed, using a sample of African American precollege students, to determine the extent to which scale scores obtained from the first edition of the Learning and Study Strategies Inventory (LASSI) were appropriate for diagnostic purposes. Data analysis revealed that 2 of the 10 LASSI scales (i.e., Anxiety and Test Strategies) significantly correlated with a measure of academic ability. These results suggested that scores obtained from these LASSI scales may provide valid assessments of African American precollege students’ academic aptitude. Implications for teachers, school counselors, and developmental studies professionals were discussed.

  9. Qualitative Validation of the IMM Model for ISS and STS Programs

    NASA Technical Reports Server (NTRS)

    Kerstman, E.; Walton, M.; Reyes, D.; Boley, L.; Saile, L.; Young, M.; Arellano, J.; Garcia, Y.; Myers, J. G.

    2016-01-01

    To validate and further improve the Integrated Medical Model (IMM), medical event data were obtained from 32 ISS and 122 STS person-missions. Using the crew characteristics from these observed missions, IMM v4.0 was used to forecast medical events and medical resource utilization. The IMM medical condition incidence values were compared to the actual observed medical event incidence values, and the IMM forecasted medical resource utilization was compared to actual observed medical resource utilization. Qualitative comparisons of these parameters were conducted for both the ISS and STS programs. The results of these analyses will provide validation of IMM v4.0 and reveal areas of the model requiring adjustments to improve the overall accuracy of IMM outputs. This validation effort should result in enhanced credibility of the IMM and improved confidence in the use of IMM as a decision support tool for human space flight.

  10. Generalizing disease management program results: how to get from here to there.

    PubMed

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2004-07-01

    For a disease management (DM) program, the ability to generalize results from the intervention group to the population, to other populations, or to other diseases is as important as demonstrating internal validity. This article provides an overview of the threats to external validity of DM programs, and offers methods to improve the capability for generalizing results obtained through the program. The external validity of DM programs must be evaluated even before program selection and implementation are begun with a prospective new client. Any fundamental differences in characteristics between individuals in an established DM program and in a new population/environment may limit the ability to generalize.

  11. Effect of windowing on lithosphere elastic thickness estimates obtained via the coherence method: Results from northern South America

    NASA Astrophysics Data System (ADS)

    Ojeda, Germán Y.; Whitman, Dean

    2002-11-01

    The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ~5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.
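    The windowing step under discussion can be made concrete: taper (or mirror) the grids, Fourier-transform them, and average the cross- and auto-spectra in radial wavenumber bins before forming the coherence. The sketch below is only an illustration of that technique with synthetic fields and arbitrary bin counts; it is not the processing chain used in the study and omits the multitaper option:

```python
import numpy as np

def hanning2d(ny, nx):
    """Separable 2-D Hanning taper."""
    return np.outer(np.hanning(ny), np.hanning(nx))

def mirror(grid):
    """Mirror a grid about both axes -- an alternative to tapering before the FFT."""
    top = np.hstack([grid, grid[:, ::-1]])
    return np.vstack([top, top[::-1, :]])

def binned_coherence(topo, grav, n_bins=20, taper=True):
    """Coherence between two grids, averaged within radial wavenumber bins."""
    if taper:
        w = hanning2d(*topo.shape)
        topo, grav = topo * w, grav * w
    T, G = np.fft.fft2(topo), np.fft.fft2(grav)
    ky = np.fft.fftfreq(topo.shape[0])[:, None]
    kx = np.fft.fftfreq(topo.shape[1])[None, :]
    k = np.hypot(kx, ky).ravel()
    cross = (T * np.conj(G)).ravel()
    p_t, p_g = (np.abs(T) ** 2).ravel(), (np.abs(G) ** 2).ravel()
    edges = np.linspace(0.0, k.max(), n_bins + 1)
    idx = np.digitize(k, edges[1:-1])          # bin index 0 .. n_bins-1 for each wavenumber
    coh = np.full(n_bins, np.nan)
    for b in range(n_bins):
        m = idx == b
        if m.any():
            coh[b] = np.abs(cross[m].mean()) ** 2 / (p_t[m].mean() * p_g[m].mean())
    return coh

# synthetic, partially correlated fields just to exercise the two windowing choices
rng = np.random.default_rng(1)
topo = rng.standard_normal((128, 128))
grav = 0.6 * topo + 0.8 * rng.standard_normal((128, 128))
print(np.round(binned_coherence(topo, grav), 2))                               # Hanning taper
print(np.round(binned_coherence(mirror(topo), mirror(grav), taper=False), 2))  # mirrored grids
```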

  12. An Update on Phased Array Results Obtained on the GE Counter-Rotating Open Rotor Model

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Horvath, Csaba; Envia, Edmane

    2013-01-01

    Beamform maps have been generated from 1) simulated data generated by the LINPROP code and 2) actual experimental phased array data obtained on the GE counter-rotating open rotor model. The beamform maps show that many of the tones in the experimental data come from their corresponding Mach radius. If the phased array points to the Mach radius associated with a tone, then it is likely that the tone is a result of the loading and thickness noise on the blades. In this case, the phased array correctly points to where the noise is coming from and indicates the axial location of the loudest source in the image but not necessarily the correct vertical location. If the phased array does not point to the Mach radius associated with a tone, then some mechanism other than loading and thickness noise may control the amplitude of the tone. In this case, the phased array may or may not point to the actual source. If the source is not rotating, it is likely that the phased array points to the source. If the source is rotating, it is likely that the phased array indicates the axial location of the loudest source but not necessarily the correct vertical location. These results indicate that you have to be careful in how you interpret phased array data obtained on an open rotor, since they may show the tones coming from a location other than the source location. With a subsonic tip speed open rotor, the tones can come from locations outboard of the blade tips. This has implications regarding noise shielding.

  13. United Information Services, Inc., CRAY 1-S/2000, FORTRAN CFT 1.10. Validation summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-12-13

    This Validation Summary Report (VSR) for the United Information Services, Inc., FORTRAN CFT 1.10 compiler running under COS Level C12 1.11 provides a consolidated summary of the results obtained from the validation of the subject compiler against the 1978 FORTRAN Standard (X3.9-1978/FIPS PUB 69). The compiler was validated against the Full Level FORTRAN level of FIPS PUB 69. The VSR is made up of several sections showing all the discrepancies found, if any. These include an overview of the validation, which lists all categories of discrepancies within X3.9-1978, and a detailed listing of the discrepancies together with the tests that failed.

  14. Towards natural language question generation for the validation of ontologies and mappings.

    PubMed

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction are performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and the validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.

  15. Development and validation of the short-form Adolescent Health Promotion Scale.

    PubMed

    Chen, Mei-Yen; Lai, Li-Ju; Chen, Hsiu-Chih; Gaete, Jorge

    2014-10-26

    Health-promoting lifestyle choices of adolescents are closely related to current and subsequent health status. However, parsimonious yet reliable and valid screening tools are scarce. The original 40-item adolescent health promotion (AHP) scale was developed by our research team and has been applied to measure adolescent health-promoting behaviors worldwide. The aim of our study was to examine the psychometric properties of a newly developed short-form version of the AHP (AHP-SF) including tests of its reliability and validity. The study was conducted in nine middle and high schools in southern Taiwan. Participants were 814 adolescents randomly divided into two subgroups with equal size and homogeneity of baseline characteristics. The first subsample (calibration sample) was used to modify and shorten the factorial model while the second subsample (validation sample) was utilized to validate the result obtained from the first one. The psychometric testing of the AHP-SF included internal reliability of McDonald's omega and Cronbach's alpha, convergent validity, discriminant validity, and construct validity with confirmatory factor analysis (CFA). The results of the CFA supported a six-factor model and 21 items were retained in the AHP-SF with acceptable model fit. For the discriminant validity test, results indicated that adolescents with lower AHP-SF scores were more likely to be overweight or obese, skip breakfast, and spend more time watching TV and playing computer games. The AHP-SF also showed excellent internal consistency with a McDonald's omega of 0.904 (Cronbach's alpha 0.905) in the calibration group. The current findings suggest that the AHP-SF is a valid and reliable instrument for the evaluation of adolescent health-promoting behaviors. Primary health care providers and clinicians can use the AHP-SF to assess these behaviors and evaluate the outcome of health promotion programs in the adolescent population.

  16. Validity and validation of expert (Q)SAR systems.

    PubMed

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of > or = 64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  17. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining

  18. Challenges of forest landscape modeling - simulating large landscapes and validating results

    Treesearch

    Hong S. He; Jian Yang; Stephen R. Shifley; Frank R. Thompson

    2011-01-01

    Over the last 20 years, we have seen a rapid development in the field of forest landscape modeling, fueled by both technological and theoretical advances. Two fundamental challenges have persisted since the inception of FLMs: (1) balancing realistic simulation of ecological processes at broad spatial and temporal scales with computing capacity, and (2) validating...

  19. Validating a dance-specific screening test for balance: preliminary results from multisite testing.

    PubMed

    Batson, Glenna

    2010-09-01

    Few dance-specific screening tools adequately capture balance. The aim of this study was to administer and modify the Star Excursion Balance Test (oSEBT) to examine its utility as a balance screen for dancers. The oSEBT involves standing on one leg while lightly targeting with the opposite foot to the farthest distance along eight spokes of a star-shaped grid. This task simulates dance in the spatial pattern and movement quality of the gesturing limb. The oSEBT was validated for distance on athletes with history of ankle sprain. Thirty-three dancers (age 20.1 +/- 1.4 yrs) participated from two contemporary dance conservatories (UK and US), with or without a history of lower extremity injury. Dancers were verbally instructed (without physical demonstration) to execute the oSEBT and four modifications (mSEBT): timed (speed), timed with cognitive interference (answering questions aloud), and sensory disadvantaging (foam mat). Stepping strategies were tracked and performance strategies video-recorded. Unlike the oSEBT results, distances reached were not significant statistically (p = 0.05) or descriptively (i.e., shorter) for either group. Performance styles varied widely, despite sample homogeneity and instructions to control for strategy. Descriptive analysis of mSEBT showed an increased number of near-falls and decreased timing on the injured limb. Dancers appeared to employ variable strategies to keep balance during this test. Quantitative analysis is warranted to define balance strategies for further validation of SEBT modifications to determine its utility as a balance screening tool.

  20. Validation of a laboratory and hospital information system in a medical laboratory accredited according to ISO 15189

    PubMed Central

    Biljak, Vanja Radisic; Ozvald, Ivan; Radeljak, Andrea; Majdenic, Kresimir; Lasic, Branka; Siftar, Zoran; Lovrencic, Marijana Vucic; Flegar-Mestric, Zlata

    2012-01-01

    Introduction: The aim of the study was to present a protocol for laboratory information system (LIS) and hospital information system (HIS) validation at the Institute of Clinical Chemistry and Laboratory Medicine of the Merkur University Hospital, Zagreb, Croatia. Materials and methods: Validity of data traceability was checked by entering all test requests for a virtual patient into HIS/LIS and printing corresponding barcoded labels that provided laboratory analyzers with the information on requested tests. The original printouts of the test results from the laboratory analyzer(s) were compared with the data obtained from LIS and entered into the provided template. Transfer of data from LIS to HIS was examined by requesting all tests in HIS and creating real data in a finding generated in LIS. Data obtained from LIS and HIS were entered into a corresponding template. The main outcome measure was the accuracy of transfer of results obtained from laboratory analyzers and of results transferred from LIS to HIS, expressed as a percentage (%). Results: The accuracy of data transfer from laboratory analyzers to LIS was 99.5% and that from LIS to HIS 100%. Conclusion: We presented our established validation protocol for the laboratory information system and demonstrated that the system meets its intended purpose. PMID:22384522

  1. Reliability and validity: Part II.

    PubMed

    Davis, Debora Winders

    2004-01-01

    Determining measurement reliability and validity involves complex processes. There is usually room for argument about most instruments. It is important that the researcher clearly describes the processes upon which she made the decision to use a particular instrument, and presents the evidence available showing that the instrument is reliable and valid for the current purposes. In some cases, the researcher may need to conduct pilot studies to obtain evidence upon which to decide whether the instrument is valid for a new population or a different setting. In all cases, the researcher must present a clear and complete explanation for the choices she has made regarding reliability and validity. The consumer must then judge the degree to which the researcher has provided an adequate and theoretically sound rationale. Although I have tried to touch on most of the important concepts related to measurement reliability and validity, it is beyond the scope of this column to be exhaustive. There are textbooks devoted entirely to specific measurement issues if readers require more in-depth knowledge.

  2. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper presents an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results are presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort is summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.

  3. Azimuthal Signature of Coincidental Brightness Temperature and Normalized Radar Cross-Section Obtained Using Airborne PALS Instrument

    NASA Technical Reports Server (NTRS)

    Colliander, Andreas; Kim, Seungbum; Yueh, Simon; Cosh, Mike; Jackson, Tom; Njoku, Eni

    2010-01-01

    Coincidental airborne brightness temperature (TB) and normalized radar cross-section (NRCS) measurements were carried out with the PALS (Passive and Active L- and S-band) instrument in the SMAPVEX08 (SMAP Validation Experiment 2008) field campaign. This paper describes results obtained from a set of flights which measured a field in 45° steps over the azimuth angle. The field contained mature soybeans with a distinct row structure. The measurements show that both TB and NRCS experience modulation effects over the azimuth, as expected from theory. The result is useful in the development and validation of land surface parameter forward models and retrieval algorithms, such as the soil moisture algorithm for NASA's SMAP (Soil Moisture Active and Passive) mission. Although the SMAP footprint will not be sensitive to small-resolution-scale effects such as the one presented in this paper, it is nevertheless important to understand these effects at smaller scales.

  4. Validation of 2 commercial Neospora caninum antibody enzyme linked immunosorbent assays

    PubMed Central

    Wu, John T.Y.; Dreger, Sally; Chow, Eva Y.W.; Bowlby, Evelyn E.

    2002-01-01

    This is a validation study of 2 commercially available enzyme linked immunosorbent assays (ELISA) for the detection of antibodies against Neospora caninum in bovine serum. Reference sera (n = 30) and field sera from an infected beef herd (n = 150) were tested by both ELISAs and the results were compared statistically. When the immunoblotting results of the reference bovine sera were compared to the ELISA results, the same identity score (96.67%) and kappa values (K) (0.93) were obtained for both ELISAs. The sensitivity and specificity values for the IDEXX test were 100% and 93.33% respectively. For the Biovet test 93.33% and 100% were obtained. The corresponding positive (PV+) and negative predictive (PV−) values for the 2 assays were 93.75% and 100% (IDEXX), and 100% and 93.75% (Biovet). In the 2nd study, competitive inhibition ELISA (c-ELISA) results on bovine sera from an infected herd were compared to the 2 sets of ELISA results. The identity scores of the 2 ELISAs were 98% (IDEXX) and 97.33% (Biovet). The K values calculated were 0.96 (IDEXX) and 0.95 (Biovet). For the IDEXX test the sensitivity and specificity were 97.56% and 98.53%, whereas for the Biovet assay 95.12% and 100% were recorded, respectively. The corresponding PV+ and PV− values were 98.77% and 97.1% (IDEXX), and 100% and 94.44% (Biovet). Our validation results showed that the 2 ELISAs worked equally well and there was no statistically significant difference between the performance of the 2 tests. Both tests showed high reproducibility, repeatability and substantial agreement with results from 2 other laboratories. A quality assurance program based on the requirements of the ISO/IEC 17025 standard has been adopted throughout this project for test validation procedures. PMID:12418782
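
    The identity scores and kappa values reported above come from 2x2 agreement tables between each ELISA and the reference result. The sketch below computes Cohen's kappa from such a table; the counts are illustrative only, not the study data.

```python
# Cohen's kappa for agreement between an ELISA and a reference test, from a
# 2x2 table: a = both positive, b = ELISA+/ref-, c = ELISA-/ref+, d = both negative.
# The counts are illustrative, not the study data.

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    n = a + b + c + d
    observed = (a + d) / n
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(a=14, b=1, c=0, d=15), 2))
```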

  5. Video scrambling for privacy protection in video surveillance: recent results and validation framework

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic

    2011-06-01

    The issue of privacy in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we first review recent Privacy Enabling Technologies (PET). Next, we discuss pertinent evaluation criteria for effective privacy protection. We then put forward a framework to assess the capacity of PET solutions to hide distinguishing facial information and to conceal identity. We conduct comprehensive and rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by PET. Results show the ineffectiveness of naïve PET such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.

  6. Validation of a laboratory and hospital information system in a medical laboratory accredited according to ISO 15189.

    PubMed

    Biljak, Vanja Radisic; Ozvald, Ivan; Radeljak, Andrea; Majdenic, Kresimir; Lasic, Branka; Siftar, Zoran; Lovrencic, Marijana Vucic; Flegar-Mestric, Zlata

    2012-01-01

    The aim of the study was to present a protocol for laboratory information system (LIS) and hospital information system (HIS) validation at the Institute of Clinical Chemistry and Laboratory Medicine of the Merkur University Hospital, Zagreb, Croatia. Validity of data traceability was checked by entering all test requests for a virtual patient into HIS/LIS and printing corresponding barcoded labels that provided laboratory analyzers with the information on requested tests. The original printouts of the test results from the laboratory analyzer(s) were compared with the data obtained from LIS and entered into the provided template. Transfer of data from LIS to HIS was examined by requesting all tests in HIS and creating real data in a finding generated in LIS. Data obtained from LIS and HIS were entered into a corresponding template. The main outcome measure was the accuracy of transfer of results obtained from laboratory analyzers and of results transferred from LIS to HIS, expressed as a percentage (%). The accuracy of data transfer from laboratory analyzers to LIS was 99.5% and that from LIS to HIS 100%. We presented our established validation protocol for the laboratory information system and demonstrated that the system meets its intended purpose.
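
    The outcome measure here is simply the share of results that arrive unchanged at each transfer step. A minimal sketch of that record-by-record comparison follows; the test codes and values are hypothetical stand-ins for the real analyzer and LIS records.

```python
# Share of test results transferred without alteration, computed by comparing
# each analyzer result with the corresponding LIS (or HIS) entry.
# Test codes and values are hypothetical.

def transfer_accuracy(analyzer_results: dict, lis_results: dict) -> float:
    matches = sum(
        1 for test, value in analyzer_results.items()
        if lis_results.get(test) == value
    )
    return 100.0 * matches / len(analyzer_results)

analyzer = {"GLU": "5.4", "CREA": "78", "ALT": "22"}
lis = {"GLU": "5.4", "CREA": "78", "ALT": "23"}  # one discrepant value
print(f"{transfer_accuracy(analyzer, lis):.1f}%")  # 66.7% in this toy example
```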

  7. The Grand Banks ERS-1 SAR wave spectra validation experiment

    NASA Technical Reports Server (NTRS)

    Vachon, P. W.; Dobson, F. W.; Smith, S. D.; Anderson, R. J.; Buckley, J. R.; Allingham, M.; Vandemark, D.; Walsh, E. J.; Khandekar, M.; Lalbeharry, R.

    1993-01-01

    As part of the ERS-1 validation program, the ERS-1 Synthetic Aperture Radar (SAR) wave spectra validation experiment was carried out over the Grand Banks of Newfoundland (Canada) in Nov. 1991. The principal objective of the experiment was to obtain complete sets of wind and wave data from a variety of calibrated instruments to validate SAR measurements of ocean wave spectra. The field program activities are described and the rather complex wind and wave conditions which were observed are summarized. Spectral comparisons with ERS-1 SAR image spectra are provided. The ERS-1 SAR is shown to have measured swell and range traveling wind seas, but did not measure azimuth traveling wind seas at any time during the experiment. Results of velocity bunching forward mapping and new measurements of the relationship between wind stress and sea state are also shown.

  8. Comparison of Theoretical Stresses and Deflections of Multicell Wings with Experimental Results Obtained from Plastic Models

    NASA Technical Reports Server (NTRS)

    Zender, George W

    1956-01-01

    The experimental deflections and stresses of six plastic multicell-wing models of unswept, delta, and swept plan form are presented and compared with previously published theoretical results obtained by the electrical analog method. The comparisons indicate that the theory is reliable except for the evaluation of stresses in the vicinity of the leading edge of delta wings and the leading and trailing edges of swept wings. The stresses in these regions are questionable, apparently because of simplifications employed in idealizing the actual structure for theoretical purposes and because of local effects of concentrated loads.

  9. In vitro Dermal Absorption of Hydroquinone: Protocol Validation and Applicability on Illegal Skin-Whitening Cosmetics.

    PubMed

    Desmedt, Bart; Ates, Gamze; Courselle, Patricia; De Beer, Jacques O; Rogiers, Vera; Hendrickx, Benoit; Deconinck, Eric; De Paepe, Kristien

    2016-01-01

    In Europe, hydroquinone is a forbidden cosmetic ingredient. It is, however, still abundantly used because of its effective skin-whitening properties. The question arises as to whether the quantities of hydroquinone used become systemically available and may cause damage to human health. Dermal absorption studies can provide this information. In the EU, dermal absorption has to be assessed in vitro since the Cosmetic Regulation 1223/2009/EC forbids the use of animals. To obtain human-relevant data, a Franz diffusion cell protocol was validated using human skin. The results obtained were comparable to those from a multicentre validation study. The protocol was applied to hydroquinone and the dermal absorption ranged between 31 and 44%, which is within the range of published in vivo human values. This shows that a well-validated in vitro dermal absorption study using human skin provides relevant human data. The validated protocol was used to determine the dermal absorption of illegal skin-whitening cosmetics containing hydroquinone. All samples gave high dermal absorption values, rendering them all unsafe for human health. These results add to our knowledge of illegal cosmetics on the EU market, namely that they exhibit a negative toxicological profile and are likely to induce health problems. © 2017 S. Karger AG, Basel.

  10. Validation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound: preliminary method and results

    NASA Astrophysics Data System (ADS)

    Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2015-03-01

    Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface based metrics and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.
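
    The validation metric described above is a mean closest-point distance between the iUS-derived feature contours and the preoperative 3-D model. The sketch below shows that computation with a brute-force nearest-neighbour search over placeholder coordinates; it is not the authors' implementation.

```python
# Mean closest-point distance between contour points delineated in tracked iUS
# images and the vertices of a preoperative 3-D model, using a brute-force
# nearest-neighbour search. Coordinates are random placeholders.
import numpy as np

def mean_closest_point_distance(contour_pts: np.ndarray, model_pts: np.ndarray) -> float:
    """contour_pts: (N, 3); model_pts: (M, 3); both in the same physical space."""
    diffs = contour_pts[:, None, :] - model_pts[None, :, :]   # (N, M, 3)
    dists = np.linalg.norm(diffs, axis=2)                     # (N, M)
    return float(dists.min(axis=1).mean())

rng = np.random.default_rng(0)
contour = rng.random((50, 3)) * 10.0    # mm, placeholder
model = rng.random((2000, 3)) * 10.0    # mm, placeholder
print(mean_closest_point_distance(contour, model))
```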

  11. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Validated spectrofluorimetric method for the determination of clonazepam in pharmaceutical preparations.

    PubMed

    Ibrahim, Fawzia; El-Enany, Nahed; Shalan, Shereen; Elsharawy, Rasha

    2016-05-01

    A simple, highly sensitive and validated spectrofluorimetric method was applied in the determination of clonazepam (CLZ). The method is based on reduction of the nitro group of clonazepam with zinc/CaCl2, and the product is then reacted with 2-cyanoacetamide (2-CNA) in the presence of ammonia (25%), yielding a highly fluorescent product. The produced fluorophore exhibits strong fluorescence intensity at λem = 383 nm after excitation at λex = 333 nm. The method was rectilinear over a concentration range of 0.1-0.5 ng/mL with a limit of detection (LOD) of 0.0057 ng/mL and a limit of quantification (LOQ) of 0.017 ng/mL. The method was fully validated according to ICH guidelines and successfully applied to the determination of CLZ in its tablets with a mean percentage recovery of 100.10 ± 0.75%. Results obtained using the proposed method were statistically compared with those obtained using a reference method, and there was no significant difference between the two methods in terms of accuracy and precision. Copyright © 2015 John Wiley & Sons, Ltd.
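
    For a linear calibration such as this one, LOD and LOQ are commonly estimated from the calibration slope and the residual standard deviation (ICH-style 3.3σ/S and 10σ/S rules). The sketch below illustrates the calculation with placeholder calibration data, not the published values.

```python
# Linear calibration with ICH-style detection and quantification limits:
# LOD = 3.3 * sigma / slope, LOQ = 10 * sigma / slope, where sigma is the
# residual standard deviation of the fit. Data points are placeholders.
import numpy as np

conc = np.array([0.1, 0.2, 0.3, 0.4, 0.5])          # ng/mL, placeholder
signal = np.array([12.1, 24.3, 35.8, 48.2, 60.1])   # fluorescence units, placeholder

slope, intercept = np.polyfit(conc, signal, 1)
residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
print(f"slope={slope:.2f}, LOD={lod:.4f} ng/mL, LOQ={loq:.4f} ng/mL")
```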

  13. NASA 9-Point LDI Code Validation Experiment

    NASA Technical Reports Server (NTRS)

    Hicks, Yolanda R.; Anderson, Robert C.; Locke, Randy J.

    2007-01-01

    This presentation highlights the experimental work to date to obtain validation data using a 9-point lean direct injector (LDI) in support of the National Combustion Code. The LDI is designed to supply Jet-A fuel and air directly into the combustor under fuel-lean conditions, such that the liquid fuel atomizes and mixes rapidly to produce short flame zones and low levels of oxides of nitrogen and CO. We present NOx and CO emission results from gas sample data that support that aspect of the design concept. We describe this injector and show high-speed movies of selected operating points. We present image-based species maps of OH, fuel, CH and NO obtained using planar laser-induced fluorescence and chemiluminescence. We also present preliminary 2-component (axial and vertical) velocity vectors of the air flow obtained using particle image velocimetry and of the fuel drops in a combusting case. For the same combusting case, we show preliminary 3-component velocity vectors obtained using a phase Doppler anemometer. For the fueled, combusting cases especially, we found optical density is a technical concern that must be addressed, but in general these preliminary results are promising. All optical-based results confirm that this injector produces short flames, typically on the order of 5 to 7 mm long at typical cruise and high-power engine cycle conditions.

  14. Validation of Methods to Assess the Immunoglobulin Gene Repertoire in Tissues Obtained from Mice on the International Space Station.

    PubMed

    Rettig, Trisha A; Ward, Claire; Pecaut, Michael J; Chapes, Stephen K

    2017-07-01

    Spaceflight is known to affect immune cell populations. In particular, splenic B cell numbers decrease during spaceflight and in ground-based physiological models. Although antibody isotype changes have been assessed during and after spaceflight, an extensive characterization of the impact of spaceflight on antibody composition has not been conducted in mice. Next Generation Sequencing and bioinformatic tools are now available to assess antibody repertoires. We can now identify immunoglobulin gene-segment usage, junctional regions, and modifications that contribute to specificity and diversity. Due to limitations on the International Space Station, alternate sample collection and storage methods must be employed. Our group compared Illumina MiSeq sequencing data from multiple sample preparation methods in normal C57Bl/6J mice to validate that sample preparation and storage would not bias the outcome of antibody repertoire characterization. In this report, we also compared the effect of sequencing techniques and a bioinformatic workflow on the data output when we assessed IgH and Igκ variable gene usage. This included assessments of our bioinformatic workflow on Illumina HiSeq and MiSeq datasets; the workflow is specifically designed to reduce bias, capture the most information from Ig sequences, and produce a data set that provides other data mining options. We validated our workflow by comparing our normal mouse MiSeq data to existing murine antibody repertoire studies, validating it for future antibody repertoire studies.

  15. DES Y1 Results: Validating Cosmological Parameter Estimation Using Simulated Dark Energy Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacCrann, N.; et al.

    We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realisations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $\Omega_m$-$\sigma_8$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $S_8=\sigma_8(\Omega_m/0.3)^{0.5}$ and $\Omega_m$ are smaller than the DES Y1 $1\sigma$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $\Omega_m$ and $S_8$ are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.

  16. Clinical validation of an epigenetic assay to predict negative histopathological results in repeat prostate biopsies.

    PubMed

    Partin, Alan W; Van Neste, Leander; Klein, Eric A; Marks, Leonard S; Gee, Jason R; Troyer, Dean A; Rieger-Christ, Kimberly; Jones, J Stephen; Magi-Galluzzi, Cristina; Mangold, Leslie A; Trock, Bruce J; Lance, Raymond S; Bigley, Joseph W; Van Criekinge, Wim; Epstein, Jonathan I

    2014-10-01

    The DOCUMENT multicenter trial in the United States validated the performance of an epigenetic test as an independent predictor of prostate cancer risk to guide decision making for repeat biopsy. Confirming an increased negative predictive value could help avoid unnecessary repeat biopsies. We evaluated the archived, cancer negative prostate biopsy core tissue samples of 350 subjects from a total of 5 urological centers in the United States. All subjects underwent repeat biopsy within 24 months with a negative (controls) or positive (cases) histopathological result. Centralized blinded pathology evaluation of the 2 biopsy series was performed in all available subjects from each site. Biopsies were epigenetically profiled for GSTP1, APC and RASSF1 relative to the ACTB reference gene using quantitative methylation specific polymerase chain reaction. Predetermined analytical marker cutoffs were used to determine assay performance. Multivariate logistic regression was used to evaluate all risk factors. The epigenetic assay resulted in a negative predictive value of 88% (95% CI 85-91). In multivariate models correcting for age, prostate specific antigen, digital rectal examination, first biopsy histopathological characteristics and race the test proved to be the most significant independent predictor of patient outcome (OR 2.69, 95% CI 1.60-4.51). The DOCUMENT study validated that the epigenetic assay was a significant, independent predictor of prostate cancer detection in a repeat biopsy collected an average of 13 months after an initial negative result. Due to its 88% negative predictive value adding this epigenetic assay to other known risk factors may help decrease unnecessary repeat prostate biopsies. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  17. SCIAMACHY validation by aircraft remote measurements: design, execution, and first results of the SCIA-VALUE mission

    NASA Astrophysics Data System (ADS)

    Fix, A.; Ehret, G.; Flentje, H.; Poberaj, G.; Gottwald, M.; Finkenzeller, H.; Bremer, H.; Bruns, M.; Burrows, J. P.; Kleinböhl, A.; Küllmann, H.; Kuttippurath, J.; Richter, A.; Wang, P.; Heue, K.-P.; Platt, U.; Wagner, T.

    2004-12-01

    For the first time three different remote sensing instruments - a sub-millimeter radiometer, a differential optical absorption spectrometer in the UV-visible spectral range, and a lidar - were deployed aboard DLR's meteorological research aircraft Falcon 20 to validate a large number of SCIAMACHY level 2 and off-line data products such as O3, NO2, N2O, BrO, OClO, H2O, aerosols, and clouds. Within two main validation campaigns of the SCIA-VALUE mission (SCIAMACHY VALidation and Utilization Experiment) extended latitudinal cross-sections stretching from polar regions to the tropics as well as longitudinal cross sections at polar latitudes at about 70° N and the equator have been generated. This contribution gives an overview over the campaigns performed and reports on the observation strategy for achieving the validation goals. We also emphasize the synergetic use of the novel set of aircraft instrumentation and the usefulness of this innovative suite of remote sensing instruments for satellite validation.

  18. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    PubMed

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
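
    The approval rule used here, an item-level Content Validity Index of at least 0.75, reduces to a simple proportion of specialists rating an attribute as relevant. A minimal sketch with hypothetical ratings:

```python
# Item-level Content Validity Index (CVI): the proportion of specialists rating
# an attribute as relevant (e.g., 3 or 4 on a 4-point scale). Ratings are
# illustrative only.

def item_cvi(ratings: list, relevant_threshold: int = 3) -> float:
    return sum(1 for r in ratings if r >= relevant_threshold) / len(ratings)

expert_ratings = [4, 3, 4, 2, 4, 3, 4, 4]   # eight hypothetical specialists
cvi = item_cvi(expert_ratings)
print(cvi, "approved" if cvi >= 0.75 else "needs revision")
```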

  19. Validation of the Practice Environment Scale to the Brazilian culture.

    PubMed

    Gasparino, Renata C; Guirardello, Edinêis de B

    2017-07-01

    To validate the Brazilian version of the Practice Environment Scale. The Practice Environment Scale is a tool that evaluates the presence of characteristics that are favourable for professional nursing practice because a better work environment contributes to positive results for patients, professionals and institutions. Methodological study including 209 nurses. Validity was assessed via a confirmatory factor analysis using structural equation modelling, in which the correlations between the instrument and the following variables were tested: burnout, job satisfaction, safety climate, perception of quality of care and intention to leave the job. Subgroups were compared and the reliability was assessed using Cronbach's alpha and the composite reliability. Factor analysis resulted in exclusion of seven items. Significant correlations were obtained between the subscales and all variables in the study. The reliability was considered acceptable. The Brazilian version of the Practice Environment Scale is a valid and reliable tool used to assess the characteristics that promote professional nursing practice. Use of this tool in Brazilian culture should allow managers to implement changes that contribute to the achievement of better results, in addition to identifying and comparing the environments of health institutions. © 2017 John Wiley & Sons Ltd.
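
    Reliability in this study was summarized with Cronbach's alpha. The sketch below computes alpha from an items-by-respondents matrix; the responses are synthetic and serve only to illustrate the formula.

```python
# Cronbach's alpha for one subscale, computed from an items-by-respondents
# matrix. The responses are synthetic (a shared latent trait plus noise).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(209, 1))                     # shared trait, synthetic
responses = latent + 0.7 * rng.normal(size=(209, 6))   # 6 correlated items
print(round(cronbach_alpha(responses), 2))
```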

  20. Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).

    PubMed

    Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K

    2013-02-01

    We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS was examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents and this complex information can be obtained relatively quickly.

  1. Content validity and reliability of test of gross motor development in Chilean children

    PubMed Central

    Cano-Cappellacci, Marcelo; Leyton, Fernanda Aleitte; Carreño, Joshua Durán

    2016-01-01

    OBJECTIVE To validate a Spanish version of the Test of Gross Motor Development (TGMD-2) for the Chilean population. METHODS Descriptive, transversal, non-experimental validity and reliability study. Four translators, three experts and 92 Chilean children, aged 5 to 10 years, students from a primary school in Santiago, Chile, participated. The Committee of Experts carried out translation, back-translation and revision processes to determine the translinguistic equivalence and content validity of the test, using the content validity index in 2013. In addition, a pilot implementation was carried out to determine test reliability in Spanish, using the intraclass correlation coefficient and the Bland-Altman method. We evaluated whether the results presented significant differences when replacing the bat with a racket, using a t-test. RESULTS We obtained a content validity index higher than 0.80 for language clarity and relevance of the TGMD-2 for children. There were significant differences in the object control subtest when comparing the results with bat and racket. The intraclass correlation coefficient for inter-rater, intra-rater and test-retest reliability was greater than 0.80 in all cases. CONCLUSIONS The TGMD-2 has appropriate content validity to be applied in the Chilean population. The reliability of this test is within the appropriate parameters and its use could be recommended in this population after the establishment of normative data, setting a further precedent for validation in other Latin American countries. PMID:26815160
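
    Test-retest reliability here was assessed with the intraclass correlation coefficient and the Bland-Altman method. The sketch below illustrates the Bland-Altman part (bias and 95% limits of agreement) on synthetic paired scores; it is not the study's analysis code.

```python
# Bland-Altman analysis for test-retest agreement: bias (mean difference) and
# 95% limits of agreement. The paired scores are synthetic placeholders.
import numpy as np

def bland_altman(test: np.ndarray, retest: np.ndarray):
    diff = test - retest
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(1)
test = rng.normal(40, 5, size=92)             # e.g., subtest scores, synthetic
retest = test + rng.normal(0, 1.5, size=92)   # small test-retest differences
bias, loa = bland_altman(test, retest)
print(f"bias={bias:.2f}, 95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f}")
```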

  2. [Comparison of the Wechsler Memory Scale-III and the Spain-Complutense Verbal Learning Test in acquired brain injury: construct validity and ecological validity].

    PubMed

    Luna-Lario, P; Pena, J; Ojeda, N

    2017-04-16

    To perform an in-depth examination of the construct validity and the ecological validity of the Wechsler Memory Scale-III (WMS-III) and the Spain-Complutense Verbal Learning Test (TAVEC). The sample consists of 106 adults with acquired brain injury who were treated in the Area of Neuropsychology and Neuropsychiatry of the Complejo Hospitalario de Navarra and displayed memory deficit as the main sequela, measured by means of specific memory tests. The construct validity is determined by examining the tasks required in each test over the basic theoretical models, comparing the performance according to the parameters offered by the tests, contrasting the severity indices of each test and analysing their convergence. The external validity is explored through the correlation between the tests and by using regression models. According to the results obtained, both the WMS-III and the TAVEC have construct validity. The TAVEC is more sensitive and captures not only the deficits in mnemonic consolidation, but also in the executive functions involved in memory. The working memory index of the WMS-III is useful for predicting the return to work at two years after the acquired brain injury, but none of the instruments anticipates the disability and dependence at least six months after the injury. We reflect upon the construct validity of the tests and their insufficient capacity to predict functionality when the sequelae become chronic.

  3. CosmoQuest:Using Data Validation for More Than Just Data Validation

    NASA Astrophysics Data System (ADS)

    Lehan, C.; Gay, P.

    2016-12-01

    It is often taken for granted that different scientists completing the same task (e.g. mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it's happening, and using pre-solved data to analyze users' progress and to help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.

  4. Injury surveillance in community sport: Can we obtain valid data from sports trainers?

    PubMed

    Ekegren, C L; Gabbe, B J; Finch, C F

    2015-06-01

    A lack of available injury data on community sports participants has hampered the development of informed preventive strategies for the broad-base of sports participation. In community sports settings, sports trainers or first-aiders are well-placed to carry out injury surveillance, but few studies have evaluated their ability to do so. The aim of this study was to investigate the reporting rate and completeness of sports trainers' injury records and agreement between sports trainers' and players' reports of injury in community Australian football. Throughout the football season, one sports trainer from each of four clubs recorded players' injuries. To validate these data, we collected self-reported injury data from players via short message service (SMS). In total, 210 discrete injuries were recorded for 139 players, 21% by sports trainers only, 59% by players via SMS only, and 21% by both. Completeness of injury records ranged from 95% to 100%. Agreement between sports trainers and players ranged from K = 0.32 (95% confidence interval: 0.27, 0.37) for date of return to football to K = 1.00 for activity when injured. Injury data collected by sports trainers may be of adequate quality for providing an understanding of the profile of injuries. However, data are likely to underestimate injury rates and should be interpreted with caution. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. The relative ease of obtaining a dermatologic appointment in Boston: how methods drive results.

    PubMed

    Weingold, David Howard; Lack, Michael Dweight; Yanowitz, Karen Leslie

    2009-06-01

    Recent reports have indicated long wait times for dermatologic appointments even for changing moles. Our objective was to determine the wait time for a person willing to make multiple calls and accept an appointment from any dermatologist at any satellite location for a changing mole from a dermatologist who advertised in a Boston, MA, telephone book. We telephoned each practice listed in a Boston, MA, telephone book. Patients making one call to each dermatologic practice on average obtained an appointment in 18 days. Patients calling two practices were offered an appointment on average in 7 days. Patients calling 3 practices were also offered an appointment in 1 week. We only telephoned practices listed in a Boston, MA, telephone book and we only surveyed one urban area. These results suggest that a reasonable concerned patient who was willing to make multiple calls to different providers in Boston, MA, can be seen in a timely fashion.

  6. Development and validation of microsatellite markers for Brachiaria ruziziensis obtained by partial genome assembly of Illumina single-end reads

    PubMed Central

    2013-01-01

    Background Brachiaria ruziziensis is one of the most important forage species planted in the tropics. The application of genomic tools to aid the selection of superior genotypes can provide support to B. ruziziensis breeding programs. However, there is a complete lack of information about the B. ruziziensis genome. Also, the availability of genomic tools, such as molecular markers, to support B. ruziziensis breeding programs is rather limited. Recently, next-generation sequencing technologies have been applied to generate sequence data for the identification of microsatellite regions and primer design. In this study, we present a first validated set of SSR markers for Brachiaria ruziziensis, selected from a de novo partial genome assembly of single-end Illumina reads. Results A total of 85,567 perfect microsatellite loci were detected in contigs with a minimum 10X coverage. We selected a set of 500 microsatellite loci identified in contigs with minimum 100X coverage for primer design and synthesis, and tested a subset of 269 primer pairs, 198 of which were polymorphic on 11 representative B. ruziziensis accessions. Descriptive statistics for these primer pairs are presented, as well as estimates of marker transferability to other relevant Brachiaria species. Finally, a set of 11 multiplex panels containing the 30 most informative markers was validated and proposed for B. ruziziensis genetic analysis. Conclusions We show that the detection and development of microsatellite markers from genome-assembled Illumina single-end DNA sequences is highly efficient. The developed markers are readily suitable for genetic analysis and marker-assisted selection of Brachiaria ruziziensis. The use of this approach for microsatellite marker development is promising for species with limited genomic information, whose breeding programs would benefit from the use of genomic tools. To our knowledge, this is the first set of microsatellite markers developed for this important species.
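
    Perfect-microsatellite detection of the kind described above can be illustrated with a simple regular-expression scan of an assembled contig. The motif lengths, repeat threshold and toy sequence below are assumptions for illustration, not the SSR-mining pipeline used in the study.

```python
# Perfect-microsatellite (SSR) scan of a contig: a 2-6 bp motif repeated at
# least five times in tandem. Simplified illustration with a toy sequence,
# not the study's SSR-mining pipeline.
import re

SSR_PATTERN = re.compile(r"([ACGT]{2,6}?)\1{4,}")   # motif + 4 or more tandem copies

def find_ssrs(seq: str):
    """Yield (start, motif, repeat_count) for perfect SSRs in a DNA sequence."""
    for m in SSR_PATTERN.finditer(seq.upper()):
        motif = m.group(1)
        yield m.start(), motif, len(m.group(0)) // len(motif)

contig = "GGATCACACACACACACAGGTTAGAGAGAGAGAGCTT"   # toy contig
for start, motif, n in find_ssrs(contig):
    print(start, motif, n)   # e.g., 4 CA 7 and 22 AG 6
```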

  7. [MusiQol: international questionnaire investigating quality of life in multiple sclerosis: validation results for the German subpopulation in an international comparison].

    PubMed

    Flachenecker, P; Vogel, U; Simeoni, M C; Auquier, P; Rieckmann, P

    2011-10-01

    The existing health-related quality of life questionnaires on multiple sclerosis (MS) only partially reflect the patient's point of view on the reduction of activities of daily living. Their development and validation was not performed in different languages. That is what prompted the development of the Multiple Sclerosis International Quality of Life (MusiQoL) Questionnaire as an international multidimensional measurement instrument. This paper presents this new development and the results of the German subgroup versus the total international sample. A total of 1,992 MS patients from 15 countries, including 209 German patients, took part in the study between January 2004 and February 2005. The patients took the MusiQoL survey at baseline and at 21±7 days as well as completing a symptom-related checklist and the SF-36 short form survey. Demographics, history and MS classification data were also generated. Reproducibility, sensitivity, convergent and discriminant validity were analysed. Convergent and discriminant validity and reproducibility were satisfactory for all dimensions of the MusiQoL. The dimensional scores correlated moderately but significantly with the SF-36 scores, but showed a discriminant validity in terms of gender, socioeconomic status and health status that was more pronounced in the overall population than in the German subpopulation. The highest correlations were observed between the MusiQoL dimension of activities of daily living and the Expanded Disability Status Scale (EDSS). The results of this study confirm the validity and reliability of MusiQoL as an instrument for measuring the quality of life of German and international MS patients.

  8. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing.

    PubMed

    Koprowski, Robert

    2014-07-04

    Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely operator's (acquisition) impact on the results obtained from image analysis and processing, has been shown on a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200'000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the

  9. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    PubMed

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-07

    Over the last decades, a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a true validation has never been completely successful because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the recently proposed estimation method by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across conditions, higher correlations, smaller error metrics, and smaller RMSE were found for the proposed BSIP identification method (IM), demonstrating its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared to conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Evaluation of a methodology to validate National Death Index retrieval results among a cohort of U.S. service members.

    PubMed

    Skopp, Nancy A; Smolenski, Derek J; Schwesinger, Daniel A; Johnson, Christopher J; Metzger-Abamukong, Melinda J; Reger, Mark A

    2017-06-01

    Accurate knowledge of the vital status of individuals is critical to the validity of mortality research. National Death Index (NDI) and NDI-Plus are comprehensive epidemiological resources for mortality ascertainment and cause of death data that require additional user validation. Currently, there is a gap in methods to guide validation of NDI search results rendered for active duty service members. The purpose of this research was to adapt and evaluate the CDC National Program of Cancer Registries (NPCR) algorithm for mortality ascertainment in a large military cohort. We adapted and applied the NPCR algorithm to a cohort of 7088 service members on active duty at the time of death at some point between 2001 and 2009. We evaluated NDI validity and NDI-Plus diagnostic agreement against the Department of Defense's Armed Forces Medical Examiner System (AFMES). The overall sensitivity of the NDI to AFMES records after the application of the NPCR algorithm was 97.1%. Diagnostic estimates of measurement agreement between the NDI-Plus and the AFMES cause of death groups were high. The NDI and NDI-Plus can be successfully used with the NPCR algorithm to identify mortality and cause of death among active duty military cohort members who die in the United States. Published by Elsevier Inc.

  11. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs obtained during the First ISLSCP Field Experiment. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.

  12. Collocation mismatch uncertainties in satellite aerosol retrieval validation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit

    2018-02-01

    Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we study systematically the effect of the sampling parameters in the validation of AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We interpreted that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (˜ 0.5°) the correlation is improved as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the

  13. Reliability and validity of gait analysis by android-based smartphone.

    PubMed

    Nishiguchi, Shu; Yamada, Minoru; Nagai, Koutatsu; Mori, Shuhei; Kajiwara, Yuu; Sonoda, Takuya; Yoshimura, Kazuya; Yoshitomi, Hiroyuki; Ito, Hiromu; Okamoto, Kazuya; Ito, Tatsuaki; Muto, Shinyo; Ishihara, Tatsuya; Aoyama, Tomoki

    2012-05-01

    Smartphones are very common devices in daily life that have a built-in tri-axial accelerometer. Similar to previously developed accelerometers, smartphones can be used to assess gait patterns. However, few gait analyses have been performed using smartphones, and their reliability and validity have not been evaluated yet. The purpose of this study was to evaluate the reliability and validity of a smartphone accelerometer. Thirty healthy young adults participated in this study. They walked 20 m at their preferred speeds, and their trunk accelerations were measured using a smartphone and a tri-axial accelerometer that was secured over the L3 spinous process. We developed a gait analysis application and installed it in the smartphone to measure the acceleration. After signal processing, we calculated the gait parameters of each measurement terminal: peak frequency (PF), root mean square (RMS), autocorrelation peak (AC), and coefficient of variance (CV) of the acceleration peak intervals. Remarkable consistency was observed in the test-retest reliability of all the gait parameter results obtained by the smartphone (p<0.001). All the gait parameter results obtained by the smartphone showed statistically significant and considerable correlations with the same parameter results obtained by the tri-axial accelerometer (PF r=0.99, RMS r=0.89, AC r=0.85, CV r=0.82; p<0.01). Our study indicates that the smartphone with gait analysis application used in this study has the capacity to quantify gait parameters with a degree of accuracy that is comparable to that of the tri-axial accelerometer.
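
    The four gait parameters named above (PF, RMS, AC, CV) can be sketched from a trunk-acceleration signal as shown below. The sampling rate, peak-detection settings and synthetic signal are assumptions for illustration, not the application's actual processing.

```python
# Gait parameters from a trunk-acceleration signal: peak frequency (PF) of the
# power spectrum, root mean square (RMS), autocorrelation peak (AC), and
# coefficient of variance (CV) of acceleration-peak intervals.
# The signal is synthetic and the processing deliberately simplified.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)  # ~2 Hz steps

x = acc - acc.mean()

# PF: frequency of the largest spectral component of the mean-removed signal
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
pf = freqs[np.argmax(spectrum)]

# RMS of the mean-removed acceleration
rms = np.sqrt(np.mean(x ** 2))

# AC: first peak of the normalized autocorrelation (step regularity)
ac_full = np.correlate(x, x, mode="full")
ac_norm = ac_full[ac_full.size // 2:] / ac_full[ac_full.size // 2]
ac_peaks, _ = find_peaks(ac_norm)
ac = ac_norm[ac_peaks[0]] if ac_peaks.size else float("nan")

# CV of the intervals between acceleration peaks (step-time variability)
peaks, _ = find_peaks(acc, distance=int(0.3 * fs))
intervals = np.diff(peaks) / fs
cv = 100.0 * intervals.std(ddof=1) / intervals.mean()

print(f"PF={pf:.2f} Hz, RMS={rms:.2f}, AC={ac:.2f}, CV={cv:.2f}%")
```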

  14. Five-Factor Screener in the 2005 National Health Interview Survey Cancer Control Supplement: Validation Results

    Cancer.gov

    Risk Factor Assessment Branch staff have assessed indirectly the validity of parts of the Five-Factor Screener in two studies: NCI's Observing Protein and Energy (OPEN) Study and the Eating at America's Table Study (EATS). In both studies, multiple 24-hour recalls in conjunction with a measurement error model were used to assess validity.

  15. Accuracy assessment/validation methodology and results of 2010–11 land-cover/land-use data for Pools 13, 26, La Grange, and Open River South, Upper Mississippi River System

    USGS Publications Warehouse

    Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.

    2016-01-11

    Similar to an AA, validation involves generating random points based on the total area for each map class. However, instead of collecting field data, two or three individuals not involved with the photo-interpretative mapping separately review each of the points onscreen and record a best-fit vegetation type(s) for each site. Once the individual analyses are complete, results are joined together and a comparative analysis is performed. The objective of this initial analysis is to identify areas where the validation results were in agreement (matches) and areas where validation results were in disagreement (mismatches). The two or three individuals then perform an analysis, looking at each mismatched site, and agree upon a final validation class. (If two vegetation types at a specific site appear to be equally prevalent, the validation team is permitted to assign the site two best-fit vegetation types.) Following the validation team’s comparative analysis of vegetation assignments, the data are entered into a database and compared to the mappers’ vegetation assignments. Agreements and disagreements between the map and validation classes are identified, and a contingency table is produced. This document presents the AA processes/results for Pools 13 and La Grange, as well as the validation process/results for Pools 13 and 26 and Open River South.
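
    The comparative step described above amounts to cross-tabulating the mappers' classes against the final validation classes; a minimal sketch with made-up class labels:

```python
import pandas as pd

mapper = pd.Series(["Wet Meadow", "Open Water", "Wet Meadow", "Floodplain Forest"])
validation = pd.Series(["Wet Meadow", "Open Water", "Floodplain Forest", "Floodplain Forest"])

# Contingency table of map class vs. validation class, plus overall agreement
contingency = pd.crosstab(mapper, validation, rownames=["map class"], colnames=["validation class"])
overall_agreement = (mapper == validation).mean()
print(contingency)
print(f"overall agreement: {overall_agreement:.0%}")
```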

  16. Numerical validation of selected computer programs in nonlinear analysis of steel frame exposed to fire

    NASA Astrophysics Data System (ADS)

    Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr

    2018-01-01

    Fire resistance of the same steel frame bearing structure is validated here using three different numerical models: a bar model prepared in the SAFIR environment and two 3D models, one developed within the framework of Autodesk Simulation Mechanical (ASM) and an alternative one developed in the environment of the Abaqus code. The results of the computer simulations are compared with experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results obtained. The experimental and numerically determined estimates of the critical temperature specific to the considered frame, related to the limit state of bearing capacity in fire, have been verified as well.

  17. Expert system verification and validation survey. Delivery 2: Survey results

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and industry applications. This is the first task of a series whose ultimate purpose is to ensure that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues is being addressed and to what extent they have impacted the development of ESs.

  18. Statistical validation of normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
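
    A hedged sketch of the two ideas named in the abstract, using scikit-learn stand-ins: nested (double) cross-validation wrapped around an L1-penalized logistic model, plus a permutation test on the cross-validated AUC. The data are synthetic; this is not the authors' NTCP pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score, permutation_test_score

X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)

# Inner loop: LASSO-style (L1) logistic regression with its penalty tuned by CV
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)
inner = GridSearchCV(lasso_logit, {"C": np.logspace(-2, 2, 5)},
                     scoring="roc_auc", cv=KFold(5, shuffle=True, random_state=1))

# Outer loop of the double cross-validation: unbiased performance of the whole procedure
outer_auc = cross_val_score(inner, X, y, scoring="roc_auc",
                            cv=KFold(5, shuffle=True, random_state=2))
print("outer-CV AUC:", outer_auc.mean())

# Permutation test: is the observed AUC significantly better than chance?
score, perm_scores, p_value = permutation_test_score(
    inner, X, y, scoring="roc_auc", cv=5, n_permutations=30, random_state=3)
print("permutation p-value:", p_value)
```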

  19. Saturn gravity results obtained from Pioneer 11 tracking data and earth-based Saturn satellite data

    NASA Technical Reports Server (NTRS)

    Null, G. W.; Lau, E. L.; Biller, E. D.; Anderson, J. D.

    1981-01-01

    Improved gravity coefficients for Saturn, its satellites and rings are calculated on the basis of a combination of Pioneer 11 spacecraft Doppler tracking data and earth-based determinations of Saturn natural satellite apse and node rates. Solutions are first obtained separately from the coherent Doppler tracking data obtained for the interval from August 20 to September 4, surrounding the time of closest approach, with the effects of solar plasma on radio signal propagation taken into account, and from secular rates for Mimas, Enceladus, Tethys, Dione, Rhea and Titan determined from astrometric data by Kozai (1957, 1976) and Garcia (1972). Combination of the data by the use of the Pioneer solution and corresponding unadjusted covariance matrix as a priori information for a secular rate analysis results in values for the total ring mass of essentially zero at a standard error level of 1.7 × 10⁻⁶ Saturn masses, a ratio of solar mass to that of the Saturn system of 3498.09 ± 0.22, masses of Rhea, Titan and Iapetus of 4.0 ± 0.9, 238.8 ± 3, and 3.4 ± 1.3 × 10⁻⁶ Saturn masses, respectively, and second and fourth zonal harmonics of 16,479 ± 18 and -937 ± 38, respectively. The harmonic coefficients are noted to be important as boundary conditions in the modeling of the Saturn interior.

  20. Furthering our Understanding of Land Surface Interactions using SVAT modelling: Results from SimSphere's Validation

    NASA Astrophysics Data System (ADS)

    North, Matt; Petropoulos, George; Ireland, Gareth; Rendal, Daisy; Carlson, Toby

    2015-04-01

    With currently predicted climate change, there is an increased requirement to gain knowledge of the terrestrial biosphere for numerous agricultural, hydrological and meteorological applications. To this end, Soil Vegetation Atmospheric Transfer (SVAT) models are quickly becoming the preferred scientific tool to monitor, at fine temporal and spatial resolutions, detailed information on numerous parameters associated with Earth system interactions. Validation of any model is critical to assess its accuracy, generality and realism across distinctive ecosystems, and is subsequently an important step before its operational distribution. In this study, the SimSphere SVAT model was validated against fifteen different sites of the FLUXNET network, where model performance was statistically evaluated by directly comparing model predictions vs in situ data for cloud-free days with a high energy balance closure. Specific focus is given to the model's ability to simulate parameters associated with the energy balance, namely Shortwave Incoming Solar Radiation (Rg), Net Radiation (Rnet), Latent Heat (LE), Sensible Heat (H), Air Temperature at 1.3 m (Tair 1.3m) and Air Temperature at 50 m (Tair 50m). Comparisons were performed for a number of distinctive ecosystem types and for 150 days in total, using in situ data from ground observational networks acquired from the year 2011 alone. The model's coherence with reality was evaluated on the basis of a series of statistical parameters including RMSD, R2, scatter, bias, MAE, NASH index, slope and intercept. Results showed good to very good agreement between predicted and observed datasets, particularly so for LE, H, Tair 1.3m and Tair 50m, where mean error distribution values indicated excellent model performance. Due to systematic underestimation, poorer simulation accuracies were exhibited for Rg and Rnet, yet all values reported are still analogous to other validatory studies of this kind. Overall, the model
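
    The agreement statistics listed above are standard; a short sketch, assuming paired arrays of predicted and observed values (and taking the NASH index to be the Nash-Sutcliffe efficiency, which is an assumption here), could look like this:

```python
import numpy as np

def agreement_stats(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    resid = pred - obs
    rmsd = np.sqrt(np.mean(resid ** 2))                      # root-mean-square difference
    bias = resid.mean()                                      # mean error
    mae = np.abs(resid).mean()                               # mean absolute error
    r2 = np.corrcoef(pred, obs)[0, 1] ** 2                   # squared correlation
    nash = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe efficiency
    slope, intercept = np.polyfit(obs, pred, 1)              # linear fit of predicted vs observed
    return dict(RMSD=rmsd, Bias=bias, MAE=mae, R2=r2, NASH=nash, Slope=slope, Intercept=intercept)

print(agreement_stats([0.8, 1.1, 0.9, 1.3], [1.0, 1.2, 0.8, 1.4]))
```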

  1. Simulation of magnetic island dynamics under resonant magnetic perturbation with the TEAR code and validation of the results on T-10 tokamak data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanov, N. V.; Kakurin, A. M.

    2014-10-15

    Simulation of the magnetic island evolution under a Resonant Magnetic Perturbation (RMP) in rotating T-10 tokamak plasma is presented, with the intent of experimentally validating the TEAR code. In the T-10 experiment chosen for simulation, the RMP consists of a stationary error field, the magnetic field of the eddy current in the resistive vacuum vessel, and the magnetic field of the externally applied, controlled halo current in the plasma scrape-off layer (SOL). The halo-current loop consists of a rail limiter, the plasma SOL, the vacuum vessel, and the external part of the circuit. Effects of plasma resistivity, viscosity, and RMP are taken into account in the TEAR code, which is based on the two-fluid MHD approximation. The radial distribution of the magnetic flux perturbation is calculated taking into account the externally applied RMP. Good agreement is obtained between the simulation results and experimental data for the cases of preprogrammed and feedback-controlled halo current in the plasma SOL.

  2. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results.

    PubMed

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-10-01

    In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users' perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in 'Quality of Work Life', 'Perceived Usefulness', 'Perceived Ease of Use', and 'User Control', respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation.

    PubMed

    Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A

    2016-10-26

    Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome, followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by
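
    A deliberately simplified contrast of the Val-MI and MI-Val orderings, with a single mean imputer standing in for proper multiple imputation (so only the information-flow aspect is illustrated, not MI itself); all data and names are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.2] = np.nan   # introduce ~20% missingness

# Val-MI: split first, then impute the training and test parts separately
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(SimpleImputer().fit_transform(X_tr), y_tr)
auc_val_mi = roc_auc_score(y_te, model.predict_proba(SimpleImputer().fit_transform(X_te))[:, 1])

# MI-Val: impute the full data set first, then split (information can leak into the test part)
X_imp = SimpleImputer().fit_transform(X)
Xf_tr, Xf_te, yf_tr, yf_te = train_test_split(X_imp, y, test_size=0.3, random_state=1)
model_f = LogisticRegression(max_iter=1000).fit(Xf_tr, yf_tr)
auc_mi_val = roc_auc_score(yf_te, model_f.predict_proba(Xf_te)[:, 1])

print(f"Val-MI AUC: {auc_val_mi:.3f}   MI-Val AUC: {auc_mi_val:.3f}")
```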

  4. Validity of Scientific Based Chemistry Android Module to Empower Science Process Skills (SPS) in Solubility Equilibrium

    NASA Astrophysics Data System (ADS)

    Antrakusuma, B.; Masykuri, M.; Ulfa, M.

    2018-04-01

    The evolution of Android technology can be applied to chemistry learning; one of the more complex chemistry concepts is solubility equilibrium, which requires science process skills (SPS). This study aims to: 1) characterize a scientific-based chemistry Android module to empower SPS, and 2) assess the validity of the module based on content validity and a feasibility test. This research uses a Research and Development (RnD) approach. Research subjects were 135 students and three teachers at three high schools in Boyolali, Central Java. Content validity of the module was tested by seven experts using Aiken's V technique, and module feasibility was tested with the students and teachers in each school. The chemistry module can be accessed using an Android device. Content validation of the module yielded V = 0.89 (valid), and the feasibility test results of 81.63% (students) and 73.98% (teachers) indicate that the module meets good criteria.
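
    Aiken's V for a single item is the sum of the raters' distances from the lowest rating category divided by the maximum possible sum, V = Σ(r_i - lo) / (n(c - 1)); a small sketch with made-up ratings from seven experts on a 1-5 scale:

```python
def aikens_v(ratings, lowest=1, categories=5):
    """Aiken's V content-validity coefficient for one item."""
    s = sum(r - lowest for r in ratings)
    return s / (len(ratings) * (categories - 1))

print(aikens_v([5, 4, 5, 4, 5, 5, 4], lowest=1, categories=5))  # seven hypothetical expert ratings
```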

  5. [Design and validation of scales to measure adolescent attitude toward eating and toward physical activity].

    PubMed

    Lima-Serrano, Marta; Lima-Rodríguez, Joaquín Salvador; Sáez-Bueno, Africa

    2012-01-01

    Different authors suggest that attitude is a mediator in behavior change, so it is a predictor of behavior practice. The main of this study was to design and to validate two scales for measure adolescent attitude toward healthy eating and adolescent attitude toward healthy physical activity. Scales were design based on a literature review. After, they were validated using an on-line Delphi Panel with eighteen experts, a pretest, and a pilot test with a sample of 188 high school students. Comprehensibility, content validity, adequacy, as well as the reliability (alpha of Cronbach test), and construct validity (exploratory factor analysis) of scales were tested. Scales validated by experts were considered appropriate in the pretest. In the pilot test, the ten-item Attitude to Eating Scale obtained α=0.72. The eight-item Attitude to Physical Activity Scale obtained α=0.86. They showed evidence of one-dimensional interpretation after factor analysis, a) all items got weights r>0.30 in first factor before rotations, b) the first factor explained a significant proportion of variance before rotations, and c) the total variance explained by the main factors extracted was greater than 50%. The Scales showed their reliability and validity. They could be employed to assess attitude to these priority intervention areas in Spanish adolescents, and to evaluate this intermediate result of health interventions and health programs.

  6. Measurement results obtained from air quality monitoring system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turzanski, P.K.; Beres, R.

    1995-12-31

    An automatic system of air pollution monitoring operates in Cracow since 1991. The organization, assembling and start-up of the network is a result of joint efforts of the US Environmental Protection Agency and the Cracow environmental protection service. At present the automatic monitoring network is operated by the Provincial Inspection of Environmental Protection. There are in total seven stationary stations situated in Cracow to measure air pollution. These stations are supported continuously by one semi-mobile (transportable) station. It allows to modify periodically the area under investigation and therefore the 3-dimensional picture of creation and distribution of air pollutants within Cracowmore » area could be more intelligible.« less

  7. Parafoveal preview benefit in reading is only obtained from the saccade goal.

    PubMed

    McDonald, Scott A

    2006-12-01

    Previous research has demonstrated that reading is less efficient when parafoveal visual information about upcoming words is invalid or unavailable; the benefit from a valid preview is realised as reduced reading times on the subsequently foveated word, and has been explained with reference to the allocation of attentional resources to parafoveal word(s). This paper presents eyetracking evidence that preview benefit is obtained only for words that are selected as the saccade target. Using a gaze-contingent display change paradigm (Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65-81), the position of the triggering boundary was set near the middle of the pretarget word. When a refixation saccade took the eye across the boundary in the pretarget word, there was no reliable effect of the validity of the target word preview. However, when the triggering boundary was positioned just after the pretarget word, a robust preview benefit was observed, replicating previous research. The current results complement findings from studies of basic visual function, suggesting that for the case of preview benefit in reading, attentional and oculomotor processes are obligatorily coupled.

  8. The National SAT Validity Study: Sharing Results from Recent College Success Research

    ERIC Educational Resources Information Center

    Shaw, Emily J.; McKenzie, Elizabeth

    2010-01-01

    [Slides] presented at the annual conference of the Southern Association for College Admission Counseling, April 2010. This presentation summarizes recent research from the national SAT Validity Study and includes information on the Admitted Class Evaluation Service (ACES) system and how ACES can help institutions conduct their own validity…

  9. Validation of Splicing Events in Transcriptome Sequencing Data

    PubMed Central

    Kaisers, Wolfgang; Ptok, Johannes; Schwender, Holger; Schaal, Heiner

    2017-01-01

    Genomic alignments of sequenced cellular messenger RNA contain gapped alignments which are interpreted as a consequence of intron removal. The resulting gap-sites, genomic locations of alignment gaps, are landmarks representing potential splice-sites. As alignment algorithms report gap-sites with a considerable false discovery rate, validations are required. We describe two quality scores, gap quality score (gqs) and weighted gap information score (wgis), developed for validation of putative splicing events: while gqs relies solely on alignment data, wgis additionally considers information from the genomic sequence. FASTQ files obtained from 54 human dermal fibroblast samples were aligned against the human genome (GRCh38) using the TopHat and STAR aligners. Statistical properties of gap-sites validated by gqs and wgis were evaluated by their sequence similarity to known exon-intron borders. Within the 54 samples, TopHat identifies 1,000,380 gap-sites and STAR reports 6,487,577. Due to the lack of strand information, however, the percentage of identified GT-AG gap-sites is rather low: while gap-sites from TopHat contain ≈89% GT-AG, gap-sites from STAR contain only ≈42% GT-AG dinucleotide pairs in merged data from the 54 fibroblast samples. Validation with gqs yields 156,251 gap-sites from TopHat alignments and 166,294 from STAR alignments. Validation with wgis yields 770,327 gap-sites from TopHat alignments and 1,065,596 from STAR alignments. Both alignment algorithms, TopHat and STAR, report gap-sites with a considerable false discovery rate, which can be drastically reduced by validation with gqs and wgis. PMID:28545234
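
    One ingredient of such validation, the check for canonical GT..AG (or its reverse-strand counterpart CT..AC) dinucleotides at a reported gap-site, can be sketched as below; the coordinates and toy sequence are hypothetical, and the actual scores (gqs, wgis) are considerably more involved.

```python
def is_gt_ag(genome_seq, gap_start, gap_end):
    """gap_start/gap_end delimit the alignment gap (intron candidate), 0-based, on the + strand."""
    donor = genome_seq[gap_start:gap_start + 2].upper()
    acceptor = genome_seq[gap_end - 2:gap_end].upper()
    plus = donor == "GT" and acceptor == "AG"
    minus = donor == "CT" and acceptor == "AC"   # GT..AG read on the reverse strand
    return plus or minus

seq = "AAGTAAGTTTTTTTTTTTTTTCAGAA"   # toy sequence with a GT..AG "intron" spanning positions 2..24
print(is_gt_ag(seq, 2, 24))          # True
```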

  10. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982, Development of a Conservative Model Validation Approach for Reliable Analysis: ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  11. Validating presupposed versus focused text information.

    PubMed

    Singer, Murray; Solar, Kevin G; Spear, Jackie

    2017-04-01

    There is extensive evidence that readers continually validate discourse accuracy and congruence, but that they may also overlook conspicuous text contradictions. Validation may be thwarted when the inaccurate ideas are embedded sentence presuppositions. In four experiments, we examined readers' validation of presupposed ("given") versus new text information. Throughout, a critical concept, such as a truck versus a bus, was introduced early in a narrative. Later, a character stated or thought something about the truck, which therefore matched or mismatched its antecedent. Furthermore, truck was presented as either given or new information. Mismatch target reading times uniformly exceeded the matching ones by similar magnitudes for given and new concepts. We obtained this outcome using different grammatical constructions and with different antecedent-target distances. In Experiment 4, we examined only given critical ideas, but varied both their matching and the main verb's factivity (e.g., factive know vs. nonfactive think). The Match × Factivity interaction closely resembled that previously observed for new target information (Singer, 2006). Thus, readers can successfully validate given target information. Although contemporary theories tend to emphasize either deficient or successful validation, both types of theory can accommodate the discourse and reader variables that may regulate validation.

  12. Validation of the Cepheid GeneXpert for Detecting Ebola Virus in Semen.

    PubMed

    Loftis, Amy James; Quellie, Saturday; Chason, Kelly; Sumo, Emmanuel; Toukolon, Mason; Otieno, Yonnie; Ellerbrok, Heinzfried; Hobbs, Marcia M; Hoover, David; Dube, Karine; Wohl, David A; Fischer, William A

    2017-02-01

    Ebola virus (EBOV) RNA persistence in semen, reported sexual transmission, and sporadic clusters at the end of the 2013-2016 epidemic have prompted recommendations that male survivors refrain from unprotected sex unless their semen is confirmed to be EBOV free. However, there is no fully validated assay for EBOV detection in fluids other than blood. The Cepheid Xpert Ebola assay for EBOV RNA detection was validated for whole semen and blood using samples obtained from uninfected donors and spiked with inactivated EBOV. The validation procedure incorporated standards from Clinical and Laboratory Standards Institute and Good Clinical Laboratory Practices guidelines for evaluating molecular devices for use in infectious disease testing. The assay produced limits of detection of 1000 copies/mL in semen and 275 copies/mL in blood. Limits of detection for both semen and blood increased with longer intervals between collection and testing, with acceptable results obtained up to 72 hours after specimen collection. The Cepheid Xpert Ebola assay is accurate and precise for detecting EBOV in whole semen. A validated assay for EBOV RNA detection in semen informs the care of male survivors of Ebola, as well as recommendations for public health. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  13. Development and validation of a two-dimensional fast-response flood estimation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R; Mcpherson, Timothy N; Burian, Steven J

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
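
    As an illustration of this class of explicit scheme, the following is a generic first-order finite-volume solver for the 1-D shallow water equations with a Rusanov (local Lax-Friedrichs) flux applied to an idealized dam break; it is a stand-in under simplified assumptions, not the authors' two-dimensional upwind formulation.

```python
import numpy as np

g = 9.81
nx, L, t_end = 200, 100.0, 3.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx

h = np.where(x < L / 2, 2.0, 1.0)   # dam-break initial depth (m)
hu = np.zeros(nx)                    # initially at rest

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

t = 0.0
while t < t_end:
    u = hu / h
    c = np.abs(u) + np.sqrt(g * h)                     # local wave speeds
    dt = min(0.9 * dx / c.max(), t_end - t)            # CFL-limited time step

    U = np.array([h, hu])
    F = flux(h, hu)
    # Rusanov interface flux between cells i and i+1
    a = np.maximum(c[:-1], c[1:])
    F_half = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])

    # Update interior cells; simple transmissive boundaries at both ends
    U[:, 1:-1] -= dt / dx * (F_half[:, 1:] - F_half[:, :-1])
    U[:, 0], U[:, -1] = U[:, 1], U[:, -2]
    h, hu = U
    t += dt

print("max depth:", h.max(), "min depth:", h.min())
```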

  14. Validity for the simplified water displacement instrument to measure arm lymphedema as a result of breast cancer surgery.

    PubMed

    Sagen, Ase; Kåresen, Rolf; Skaane, Per; Risberg, May Arna

    2009-05-01

    To evaluate concurrent and construct validity for the Simplified Water Displacement Instrument (SWDI), an instrument for measuring arm volumes and arm lymphedema as a result of breast cancer surgery. Validity design. Hospital setting. Women (N=23; mean age, 64±11 y) were examined 6 years after breast cancer surgery with axillary node dissection. Not applicable. The SWDI was included for measuring arm volumes to estimate arm lymphedema as a result of breast cancer surgery. A computed tomography (CT) scan was included to examine the cross-sectional areas (CSAs) in square millimeters for the subcutaneous tissue, for the muscle tissue, and for measuring tissue density in Hounsfield units. Magnetic resonance imaging (MRI) with T2-weighted sequences was included to show increased signal intensity in subcutaneous and muscle tissue areas. The affected arm volume measured by the SWDI was significantly correlated with the total CSA of the affected upper limb (R=.904) and also with the CSA of the subcutaneous tissue and muscle tissue (R=.867 and R=.725, respectively; P<.001). The CSA of the subcutaneous tissue for the upper limb was significantly larger compared with the control limb (11%). Tissue density measured in Hounsfield units did not correlate significantly with arm volume (P>.05). The affected arm volume was significantly larger (5%) than the control arm volume (P<.05). Five (22%) women had arm lymphedema defined as a 10% increase in the affected arm volume compared with the control arm volume, and an increased signal intensity was identified in all 5 women on MRI (T2-weighted, kappa=.777, P<.001). The SWDI showed high concurrent and construct validity as shown by significant correlations between the CSA (CT) of the subcutaneous and muscle areas of the affected limb and the affected arm volume (P<.001). There was a high agreement between those subjects who were diagnosed with arm lymphedema by using the SWDI and the increased signal intensity on MRI, with a kappa

  15. Validation of the Impulsive/Premeditated Aggression Scale in Mexican psychiatric patients.

    PubMed

    Romans, Laura; Fresán, Ana; Sentíes, Héctor; Sarmiento, Emmanuel; Berlanga, Carlos; Robles-García, Rebeca; Tovilla-Zarate, Carlos-Alfonso

    2015-07-01

    Aggression has been linked to several psychiatric disorders. None of the available instruments validated in Mexico is able to classify aggression as impulsive or premeditated. The Impulsive/Premeditated Aggression Scale (IPAS) is a self-report instrument designed to characterize aggressiveness as predominately impulsive or premeditated. The aim of the study was to determine the validity and reliability of the IPAS in a sample of Mexican psychiatric patients. A total of 163 patients diagnosed with affective, anxiety or psychotic disorder were included. A principal-component factor analysis was performed to obtain construct validity of the IPAS impulsive and premeditated aggression subscales; convergent validity as well as internal consistency of subscales were also determined. The rotated matrix accounted for 33.4% of the variance. Significant values were obtained for convergent validity and reliability of the IPAS subscales. The IPAS is an adequate instrument, which might be used to differentiate the type of aggressive behavior in Mexican psychiatric patients.

  16. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing the K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
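
    The object at the heart of the method, a two-dimensional CV-error surface over the two cost/regularization parameters, can be illustrated with plain grid search in scikit-learn; the paper's solution-surface algorithm computes this surface over all parameter values rather than on a discrete grid, so the sketch below only shows the concept.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)

C_grid = np.logspace(-2, 2, 9)            # overall regularization strength
w_grid = np.logspace(-1, 1, 9)            # relative misclassification cost of the positive class
cv_error = np.empty((C_grid.size, w_grid.size))

# Build the two-dimensional CV-error surface point by point
for i, C in enumerate(C_grid):
    for j, w in enumerate(w_grid):
        clf = SVC(C=C, class_weight={0: 1.0, 1: w}, kernel="rbf")
        cv_error[i, j] = 1.0 - cross_val_score(clf, X, y, cv=5).mean()

i_best, j_best = np.unravel_index(cv_error.argmin(), cv_error.shape)
print("best C:", C_grid[i_best], "best weight:", w_grid[j_best],
      "CV error:", cv_error[i_best, j_best])
```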

  17. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results for CH4 observations from the Cryogenic Limb Array Etalon Sounder (CLAES) and the Halogen Occultation Experiment (HALOE) instruments on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousand kilometers away from the HALOE observation locations was well captured by the Kalman filter, owing to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode, except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the
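
    The χ² criterion referred to above is, in essence, the normalized innovation statistic dᵀS⁻¹d with S = HPHᵀ + R, which should average to the number of observations when the forecast and observation error covariances are consistent. A toy sketch, with made-up matrices, of that consistency check:

```python
import numpy as np

def innovation_chi_square(d, H, P, R):
    """d: innovation vector (observation minus forecast in observation space)."""
    S = H @ P @ H.T + R
    return float(d @ np.linalg.solve(S, d))

# Toy check: with consistent covariances the statistic averages to len(d)
rng = np.random.default_rng(0)
n_obs, n_state = 5, 8
H = rng.standard_normal((n_obs, n_state))
P = np.eye(n_state) * 0.5      # forecast error covariance
R = np.eye(n_obs) * 0.1        # observation error covariance
S_true = H @ P @ H.T + R
vals = [innovation_chi_square(rng.multivariate_normal(np.zeros(n_obs), S_true), H, P, R)
        for _ in range(2000)]
print(np.mean(vals))           # ≈ n_obs when the covariances are consistent
```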

  18. Repeatability and reproducibility of measurements of the suburethral tape location obtained in pelvic floor ultrasound performed with a transvaginal probe

    PubMed Central

    Dresler, Maria Magdalena; Kociszewski, Jacek; Pędraszewski, Piotr; Trzeciak, Agnieszka; Surkont, Grzegorz

    2017-01-01

    Introduction Implants used to treat patients with urogynecological conditions are well visible in US examination. The position of the suburethral tape (sling) is determined in relation to the urethra or the pubic symphysis. Aim of the study The study was aimed at assessing the accuracy of measurements determining suburethral tape location obtained in pelvic US examination performed with a transvaginal probe. Material and methods The analysis covered the results of sonographic measurements obtained according to a standardized technique in women referred for urogynecological diagnostics. Data from a total of 68 patients were used to analyse the repeatability and reproducibility of results obtained on the same day. Results The intraclass correlation coefficient for the repeatability and reproducibility of the sonographic measurements of suburethral tape location obtained with a transvaginal probe ranged from 0.6665 to 0.9911. The analysis of the measurements confirmed their consistency to be excellent or good. Conclusions Excellent and good repeatability and reproducibility of the measurements of the suburethral tape location obtained in a pelvic ultrasound performed with a transvaginal probe confirm the test’s validity and usefulness for clinical and academic purposes. PMID:28856017

  19. Pooled results from five validation studies of dietary self-report instruments using recovery biomarkers for potassium and sodium intake

    USDA-ARS?s Scientific Manuscript database

    We have pooled data from five large validation studies of dietary self-report instruments that used recovery biomarkers as referents to assess food frequency questionnaires (FFQs) and 24-hour recalls. We reported on total potassium and sodium intakes, their densities, and their ratio. Results were...

  20. P185-M Protein Identification and Validation of Results in Workflows that Integrate over Various Instruments, Datasets, Search Engines

    PubMed Central

    Hufnagel, P.; Glandorf, J.; Körting, G.; Jabs, W.; Schweiger-Hufnagel, U.; Hahner, S.; Lubeck, M.; Suckau, D.

    2007-01-01

    Analysis of complex proteomes often results in long protein lists, but falls short in measuring the validity of identification and quantification results on a greater number of proteins. Biological and technical replicates are mandatory, as is the combination of the MS data from various workflows (gels, 1D-LC, 2D-LC), instruments (TOF/TOF, trap, qTOF or FTMS), and search engines. We describe a database-driven study that combines two workflows, two mass spectrometers, and four search engines with protein identification following a decoy database strategy. The sample was a tryptically digested lysate (10,000 cells) of a human colorectal cancer cell line. Data from two LC-MALDI-TOF/TOF runs and a 2D-LC-ESI-trap run using capillary and nano-LC columns were submitted to the proteomics software platform ProteinScape. The combined MALDI data and the ESI data were searched using Mascot (Matrix Science), Phenyx (GeneBio), ProteinSolver (Bruker and Protagen), and Sequest (Thermo) against a decoy database generated from IPI-human in order to obtain one protein list across all workflows and search engines at a defined maximum false-positive rate of 5%. ProteinScape combined the data to one LC-MALDI and one LC-ESI dataset. The initial separate searches from the two combined datasets generated eight independent peptide lists. These were compiled into an integrated protein list using the ProteinExtractor algorithm. An initial evaluation of the generated data led to the identification of approximately 1200 proteins. Result integration on a peptide level allowed discrimination of protein isoforms that would not have been possible with a mere combination of protein lists.
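
    The decoy-database step can be sketched as follows: sort identifications by score and accept those above the score at which the estimated false-positive rate (decoys over targets) exceeds the chosen limit. Scores and labels below are synthetic, and real pipelines typically work with q-values rather than this simple cut.

```python
import numpy as np

def score_threshold_at_fdr(scores, is_decoy, max_fdr=0.05):
    """Return the lowest score still satisfying the decoy-estimated FDR limit."""
    order = np.argsort(scores)[::-1]                 # best score first
    decoys = np.cumsum(is_decoy[order])
    targets = np.cumsum(~is_decoy[order])
    fdr = decoys / np.maximum(targets, 1)
    ok = np.where(fdr <= max_fdr)[0]
    return scores[order[ok[-1]]] if ok.size else np.inf

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(3, 1, 800), rng.normal(0, 1, 800)])   # target hits, decoy hits
is_decoy = np.concatenate([np.zeros(800, bool), np.ones(800, bool)])
print("accept identifications with score >=", score_threshold_at_fdr(scores, is_decoy))
```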

  1. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    NASA Astrophysics Data System (ADS)

    Nieciąg, Halina

    2015-10-01

    Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software that calculates the values of measurands. Due to the number and nature of the variables affecting coordinate measurement results and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt to improve on the results obtained with classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
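
    The difference between the classic simple-sampling scheme and LHS can be shown in a few lines with scipy's qmc module; this is a generic illustration, not the implementation discussed in the paper.

```python
import numpy as np
from scipy.stats import qmc

n = 16
rng = np.random.default_rng(0)
simple = rng.random((n, 2))                       # plain Monte Carlo sample in [0, 1)^2
lhs = qmc.LatinHypercube(d=2, seed=0).random(n)   # Latin Hypercube sample in [0, 1)^2

# Each of the n equal-width bins along every axis contains exactly one LHS point,
# which is what typically reduces the variance of Monte Carlo estimates.
print(np.sort((lhs[:, 0] * n).astype(int)))       # -> 0, 1, ..., n-1
print(np.sort((simple[:, 0] * n).astype(int)))    # usually shows repeats and gaps
```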

  2. The Fruit & Vegetable Screener in the 2000 California Health Interview Survey: Validation Results

    Cancer.gov

    In this study, multiple 24-hour recalls in conjunction with a measurement error model were used to assess validity. The screeners used in the EATS included additional foods and reported portion sizes.

  3. A content validity study of signs, symptoms and diseases/health problems expressed in LIBRAS1

    PubMed Central

    Aragão, Jamilly da Silva; de França, Inacia Sátiro Xavier; Coura, Alexsandro Silva; de Sousa, Francisco Stélio; Batista, Joana D'arc Lyra; Magalhães, Isabella Medeiros de Oliveira

    2015-01-01

    Objectives: to validate the content of signs, symptoms and diseases/health problems expressed in LIBRAS for people with deafness. Method: methodological development study, which involved 36 people with deafness and three LIBRAS specialists. The study was conducted in three stages: investigation of the signs, symptoms and diseases/health problems referred to by people with deafness, reported in a questionnaire; video recordings of how people with deafness express, through LIBRAS, the signs, symptoms and diseases/health problems; and validation of the content of the recordings of the expressions by LIBRAS specialists. Data were processed in a spreadsheet and analyzed using univariate tables, with absolute frequencies and percentages. The validation results were analyzed using the Content Validity Index (CVI). Results: 33 expressions in LIBRAS of signs, symptoms and diseases/health problems were evaluated, and 28 expressions obtained a satisfactory CVI (1.00). Conclusions: the signs, symptoms and diseases/health problems expressed in LIBRAS presented validity, in the study region, for health professionals, especially nurses, for use in the clinical anamnesis of the nursing consultation for people with deafness. PMID:26625991
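
    A minimal sketch of the item-level Content Validity Index used here, under the assumption of a binary adequate/inadequate judgment from each of the three specialists:

```python
def item_cvi(ratings_agree):
    """Fraction of specialists who judge the expression adequate."""
    return sum(ratings_agree) / len(ratings_agree)

print(item_cvi([True, True, True]))   # CVI = 1.00, the satisfactory value in the study
print(item_cvi([True, True, False]))  # CVI ≈ 0.67
```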

  4. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a); model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating that they are appropriate measures for evaluating model realizations. The use of

  5. Experimental Results Obtained with Air Liquide Cold Compression System: CERN LHC and SNS Projects

    NASA Astrophysics Data System (ADS)

    Delcayre, F.; Courty, J.-C.; Hamber, F.; Hilbert, B.; Monneret, E.; Toia, J.-L.

    2006-04-01

    Large-scale collider facilities will make intensive use of superconducting magnets operating below 2.0 K, which dictates high-capacity refrigeration systems operating below 2.0 K. These systems, making use of cryogenic centrifugal compressors in a series arrangement with room temperature screw compressors, are coupled to a refrigerator providing a certain power at 4.5 K. A first Air Liquide Cold Compression System (CCS) unit was built and delivered to CERN in 2001. Installed at the beginning of 2002, it was commissioned and tested successfully during 2002. A series of four sets of identical CCS were then tested in 2004. Another set of four cryogenic centrifugal compressors (CCC) was delivered to the Thomas Jefferson National Accelerator Facility (JLAB) for the Spallation Neutron Source (SNS) in 2002. These compressors were tested and commissioned from December 2004 to July 2005. The experimental results obtained with these systems will be presented and discussed: the characteristics of the CCC will be detailed, and the principles of control for the CCC in series will be described.

  6. Examining the validity of self-reports on scales measuring students' strategic processing.

    PubMed

    Samuelstuen, Marit S; Bråten, Ivar

    2007-06-01

    Self-report inventories trying to measure strategic processing at a global level have been much used in both basic and applied research. However, the validity of global strategy scores is open to question because such inventories assess strategy perceptions outside the context of specific task performance. The primary aim was to examine the criterion-related and construct validity of the global strategy data obtained with the Cross-Curricular Competencies (CCC) scale. Additionally, we wanted to compare the validity of these data with the validity of data obtained with a task-specific self-report inventory focusing on the same types of strategies. The sample included 269 10th-grade students from 12 different junior high schools. Global strategy use as assessed with the CCC was compared with task-specific strategy use reported in three different reading situations. Moreover, relationships between scores on the CCC and scores on measures of text comprehension were examined and compared with relationships between scores on the task-specific strategy measure and the same comprehension measures. The comparison between the CCC strategy scores and the task-specific strategy scores suggested only modest criterion-related validity for the data obtained with the global strategy inventory. The CCC strategy scores were also not related to the text comprehension measures, indicating poor construct validity. In contrast, the task-specific strategy scores were positively related to the comprehension measures, indicating good construct validity. Attempts to measure strategic processing at a global level seem to have limited validity and utility.

  7. Urdu translation of the Hamilton Rating Scale for Depression: Results of a validation study.

    PubMed

    Hashmi, Ali M; Naz, Shahana; Asif, Aftab; Khawaja, Imran S

    2016-01-01

    To develop a standardized, validated Urdu version of the Hamilton Rating Scale for Depression (HAM-D). After translation of the HAM-D into Urdu following standard guidelines, the final Urdu version (HAM-D-U) was administered to 160 depressed outpatients. Inter-item correlation was assessed by calculating Cronbach's alpha. The correlation between HAM-D-U scores at baseline and after a 2-week interval was evaluated for test-retest reliability, and the scores of two clinicians on the HAM-D-U were compared for inter-rater reliability. For establishing concurrent validity, scores on the HAM-D-U and BDI-U were compared using the Spearman correlation coefficient. The study was conducted at Mayo Hospital, Lahore, from May to December 2014. The Cronbach's alpha for the HAM-D-U was 0.71. Composite scores for the HAM-D-U at baseline and after a 2-week interval were highly correlated with each other (Spearman correlation coefficient 0.83, p < 0.01), indicating good test-retest reliability. Composite scores for the HAM-D-U and BDI-U were positively correlated (Spearman correlation coefficient 0.85, p < 0.01), indicating good concurrent validity. Scores of the two clinicians on the HAM-D-U were also positively correlated (Spearman correlation coefficient 0.82, p < 0.01), indicating good inter-rater reliability. The HAM-D-U is a valid and reliable instrument for the assessment of depression, showing good inter-rater and test-retest reliability, and can serve as a tool for either clinical management or research.

  8. Validation of streamflow measurements made with acoustic doppler current profilers

    USGS Publications Warehouse

    Oberg, K.; Mueller, D.S.

    2007-01-01

    The U.S. Geological Survey and other international agencies have collaborated to conduct laboratory and field validations of acoustic Doppler current profiler (ADCP) measurements of streamflow. Laboratory validations made in a large towing basin show that the mean differences between tow cart velocity and ADCP bottom-track and water-track velocities were -0.51 and -1.10%, respectively. Field validations of commercially available ADCPs were conducted by comparing streamflow measurements made with ADCPs to reference streamflow measurements obtained from concurrent mechanical current-meter measurements, stable rating curves, salt-dilution measurements, or acoustic velocity meters. Data from 1,032 transects, comprising 100 discharge measurements, were analyzed from 22 sites in the United States, Canada, Sweden, and The Netherlands. Results of these analyses show that broadband ADCP streamflow measurements are unbiased when compared to the reference discharges regardless of the water mode used for making the measurement. Measurement duration is more important than the number of transects for reducing the uncertainty of the ADCP streamflow measurement. ?? 2007 ASCE.

  9. Validation of UARS MLS Ozone Measurements

    NASA Technical Reports Server (NTRS)

    Froidevaux, L.; Read, W. G.; Lungu, T. A; Cofield, R. E.; Fishbein, E. F.; Flower, D. A.; Jarnot, R. f.; Ridenoure, B. P.; Shippony, Z.; Waters, J. W.; hide

    1994-01-01

    This paper describes the validation of ozone data from the Upper Atmosphere Research Satellite (UARS) Microwave Limb Sounder (MLS). The MLS ozone retrievals are obtained from the calibrated microwave radiances (emission spectra) in two separate bands, at frequencies near 205 and 183 GHz.

  10. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

    The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face these predictions could help investigations out of an impasse. Although, there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width or alternatively using dimensionality reduction techniques such as principal component analysis where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex and ancestry matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions either locally in particular parts of the face or in terms of overall similarity is mainly determined by sex and genomic ancestry. The SNP-effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge this is the first effort at generating facial composites from DNA and the results are preliminary

  11. Pump CFD code validation tests

    NASA Technical Reports Server (NTRS)

    Brozowski, L. A.

    1993-01-01

    Pump CFD code validation tests were accomplished by obtaining nonintrusive flow characteristic data at key locations in generic current liquid rocket engine turbopump configurations. Data were obtained with a laser two-focus (L2F) velocimeter at scaled design flow. Three components were surveyed: a 1970's-designed impeller, a 1990's-designed impeller, and a four-bladed unshrouded inducer. Two-dimensional velocities were measured upstream and downstream of the two impellers. Three-dimensional velocities were measured upstream, downstream, and within the blade row of the unshrouded inducer.

  12. A diagnostic technique used to obtain cross range radiation centers from antenna patterns

    NASA Technical Reports Server (NTRS)

    Lee, T. H.; Burnside, W. D.

    1988-01-01

    A diagnostic technique to obtain cross range radiation centers based on antenna radiation patterns is presented. This method is similar to the synthetic aperture processing of scattered fields in the radar application. Coherent processing of the radiated fields is used to determine the various radiation centers associated with the far-zone pattern of an antenna for a given radiation direction. This technique can be used to identify an unexpected radiation center that creates an undesired effect in a pattern; on the other hand, it can improve a numerical simulation of the pattern by identifying other significant mechanisms. Cross range results for two 8' reflector antennas are presented to illustrate as well as validate that technique.

  13. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  14. Validating Inertial Confinement Fusion (ICF) predictive capability using perturbed capsules

    NASA Astrophysics Data System (ADS)

    Schmitt, Mark; Magelssen, Glenn; Tregillis, Ian; Hsu, Scott; Bradley, Paul; Dodd, Evan; Cobble, James; Flippo, Kirk; Offerman, Dustin; Obrey, Kimberly; Wang, Yi-Ming; Watt, Robert; Wilke, Mark; Wysocki, Frederick; Batha, Steven

    2009-11-01

    Achieving ignition on NIF is a monumental step on the path toward utilizing fusion as a controlled energy source. Obtaining robust ignition requires accurate ICF models to predict the degradation of ignition caused by heterogeneities in capsule construction and irradiation. LANL has embarked on a project to induce controlled defects in capsules to validate our ability to predict their effects on fusion burn. These efforts include the validation of feature-driven hydrodynamics and mix in a convergent geometry. This capability is needed to determine the performance of capsules imploded under less-than-optimum conditions on future IFE facilities. LANL's recently initiated Defect Implosion Experiments (DIME) conducted at Rochester's Omega facility are providing input for these efforts. Recent simulation and experimental results will be shown.

  15. Fragmentation Point Detection of JPEG Images at DHT Using Validator

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Deris, Mustafa Mat

    File carving is an important, practical technique for data recovery in digital forensics investigations and is particularly useful when filesystem metadata is unavailable or damaged. Reassembly of JPEG files with RST markers that are fragmented within the scan area has been addressed before; however, fragmentation within the Define Huffman Table (DHT) segment is yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made in this paper. Firstly, three fragmentation points within the DHT area are identified. Secondly, a few novel validators are proposed to detect these fragmentations. The results obtained from tests on manually fragmented JPEG files show that all three fragmentation points within the DHT are successfully detected using the validators.
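
    The validators themselves are not reproduced in the abstract. As a hedged illustration of the kind of structural check involved, the sketch below verifies that a DHT segment's declared length is consistent with the Huffman tables it contains (marker 0xFFC4, a two-byte length, then for each table one class/identifier byte, sixteen code-length counts and that many symbol bytes); a fragmentation point inside the segment will typically break this invariant. The function name and return convention are assumptions, not the authors' validators.

      def dht_segment_is_consistent(data, offset):
          """Check the JPEG DHT segment starting at `offset` (must point at the 0xFF 0xC4 marker).

          Returns True if the declared segment length is consistent with the Huffman
          tables it contains; an inconsistency suggests truncation or fragmentation.
          """
          if data[offset:offset + 2] != b'\xff\xc4':
              return False
          seg_len = int.from_bytes(data[offset + 2:offset + 4], 'big')  # includes the 2 length bytes
          end = offset + 2 + seg_len
          if end > len(data):
              return False                      # segment runs past the available data
          pos = offset + 4
          while pos < end:
              pos += 1                          # table class / identifier byte
              counts = data[pos:pos + 16]       # number of codes of each length 1..16
              if len(counts) < 16:
                  return False
              pos += 16 + sum(counts)           # skip the symbol values
          return pos == end                     # table lengths must add up exactly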

  16. SCIAMACHY validation by aircraft remote sensing: design, execution, and first measurement results of the SCIA-VALUE mission

    NASA Astrophysics Data System (ADS)

    Fix, A.; Ehret, G.; Flentje, H.; Poberaj, G.; Gottwald, M.; Finkenzeller, H.; Bremer, H.; Bruns, M.; Burrows, J. P.; Kleinböhl, A.; Küllmann, H.; Kuttippurath, J.; Richter, A.; Wang, P.; Heue, K.-P.; Platt, U.; Pundt, I.; Wagner, T.

    2005-05-01

    For the first time three different remote sensing instruments - a sub-millimeter radiometer, a differential optical absorption spectrometer in the UV-visible spectral range, and a lidar - were deployed aboard DLR's meteorological research aircraft Falcon 20 to validate a large number of SCIAMACHY level 2 and off-line data products such as O3, NO2, N2O, BrO, OClO, H2O, aerosols, and clouds. Within two validation campaigns of the SCIA-VALUE mission (SCIAMACHY VALidation and Utilization Experiment) extended latitudinal cross-sections stretching from polar regions to the tropics, as well as longitudinal cross-sections at polar latitudes at about 70° N and at the equator, were generated. This contribution gives an overview of the campaigns performed and reports on the observation strategy for achieving the validation goals. We also emphasize the synergetic use of the novel set of aircraft instrumentation and the usefulness of this innovative suite of remote sensing instruments for satellite validation.

  17. Cray Research, Inc. Cray 1-S, Cray FORTRAN Translator (CFT) Version 1.11 Bugfix 1. Validation summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-09-09

    This Validation Summary Report (VSR) for the Cray Research, Inc., CRAY FORTRAN Translator (CFT) Version 1.11 Bugfix 1 running under the CRAY Operating System (COS) Version 1.12 provides a consolidated summary of the results obtained from the validation of the subject compiler against the 1978 FORTRAN Standard (X3.9-1978/FIPS PUB 69). The compiler was validated against the Full Level FORTRAN level of FIPS PUB 69. The VSR is made up of several sections showing all the discrepancies found, if any. These include an overview of the validation which lists all categories of discrepancies together with the tests which failed.

  18. Accelerating cross-validation with total variation and its application to super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki

    2017-12-01

    We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
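
    For contrast with the approximation, literal cross-validation simply refits the penalized model once per held-out observation. The sketch below does this for a plain l1-penalized regression using scikit-learn's Lasso as a stand-in; the paper's estimator additionally carries a total-variation penalty and replaces the repeated refits with a single fit plus a perturbative correction.

      import numpy as np
      from sklearn.linear_model import Lasso

      def loo_cve(X, y, alpha=0.1):
          """Literal leave-one-out cross-validation error for an l1-penalized linear model.

          X: (n_samples, n_features) array; y: (n_samples,) array.
          """
          n = len(y)
          errors = np.empty(n)
          for i in range(n):
              keep = np.arange(n) != i
              model = Lasso(alpha=alpha, max_iter=10000).fit(X[keep], y[keep])
              errors[i] = (y[i] - model.predict(X[i:i + 1])[0]) ** 2
          return errors.mean()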

  19. Coupled CFD and Particle Vortex Transport Method: Wing Performance and Wake Validations

    DTIC Science & Technology

    2008-06-26

    the PVTM analysis. The results obtained using the coupled RANS/PVTM analysis compare well with experimental data, in particular the pressure ... is validated against wind tunnel test data. Comparisons with measured pressure distribution, loadings, and vortex parameters, and the corresponding ...

  20. Development and validation of an ionic chromatography method for the determination of nitrate, nitrite and chloride in meat.

    PubMed

    Lopez-Moreno, Cristina; Perez, Isabel Viera; Urbano, Ana M

    2016-03-01

    The purpose of this study was to develop and validate a method for the analysis of certain preservatives in meat and to obtain a suitable Certified Reference Material (CRM) for this task. The preservatives studied were NO3(-), NO2(-) and Cl(-), as they serve as important antimicrobial agents in meat that inhibit the growth of spoilage bacteria. The meat samples were prepared using a treatment that allowed the production of a known CRM concentration that is highly homogeneous and stable in time. Matrix effects were also studied to evaluate their influence on the analytical signal for the ions of interest, showing that the matrix does not affect the final result. An assessment of the signal variation in time was carried out for the ions. In this regard, although the chloride and nitrate signals remained stable for the duration of the study, the nitrite signal decreased appreciably with time. A mathematical treatment of the data gave a stable nitrite signal, yielding a method suitable for the determination of these anions in meat. A statistical study was carried out for the validation of the method, in which precision, accuracy, uncertainty and other parameters were evaluated, with satisfactory results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Pinaverium Bromide: Development and Validation of Spectrophotometric Methods for Assay and Dissolution Studies.

    PubMed

    Martins, Danielly da Fonte Carvalho; Florindo, Lorena Coimbra; Machado, Anna Karolina Mouzer da Silva; Todeschini, Vítor; Sangoi, Maximiliano da Silva

    2017-11-01

    This study presents the development and validation of UV spectrophotometric methods for the determination of pinaverium bromide (PB) in tablet assay and dissolution studies. The methods were satisfactorily validated according to International Conference on Harmonization guidelines. The response was linear (r2 > 0.99) in the concentration ranges of 2-14 μg/mL at 213 nm and 10-70 μg/mL at 243 nm. The LOD and LOQ were 0.39 and 1.31 μg/mL, respectively, at 213 nm. For the 243 nm method, the LOD and LOQ were 2.93 and 9.77 μg/mL, respectively. Precision was evaluated by RSD, and the obtained results were lower than 2%. Adequate accuracy was also obtained. The methods proved to be robust using a full factorial design evaluation. For PB dissolution studies, the best conditions were achieved using a United States Pharmacopeia Dissolution Apparatus 2 (paddle) at 50 rpm and with 900 mL 0.1 M hydrochloric acid as the dissolution medium, presenting satisfactory results during the validation tests. In addition, the kinetic parameters of drug release were investigated using model-dependent methods, and the dissolution profiles were best described by the first-order model. Therefore, the proposed methods were successfully applied for the assay and dissolution analysis of PB in commercial tablets.
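
    LOD and LOQ values of the kind reported above are commonly estimated from the calibration curve as LOD = 3.3 σ/S and LOQ = 10 σ/S, where σ is the residual standard deviation of the fit and S its slope. The sketch below shows that standard calculation; it may or may not match the authors' exact procedure.

      import numpy as np

      def lod_loq(concentrations, responses):
          """ICH-style LOD and LOQ estimates from a linear calibration curve."""
          x = np.asarray(concentrations, float)
          y = np.asarray(responses, float)
          slope, intercept = np.polyfit(x, y, 1)
          sigma = (y - (slope * x + intercept)).std(ddof=2)   # residual SD of the fit
          return 3.3 * sigma / slope, 10 * sigma / slope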

  2. Simbol-X Hard X-ray Focusing Mirrors: Results Obtained During the Phase A Study

    NASA Astrophysics Data System (ADS)

    Tagliaferri, G.; Basso, S.; Borghi, G.; Burkert, W.; Citterio, O.; Civitani, M.; Conconi, P.; Cotroneo, V.; Freyberg, M.; Garoli, D.; Gorenstein, P.; Hartner, G.; Mattarello, V.; Orlandi, A.; Pareschi, G.; Romaine, S.; Spiga, D.; Valsecchi, G.; Vernani, D.

    2009-05-01

    Simbol-X will push grazing incidence imaging up to 80 keV, providing a strong improvement both in sensitivity and angular resolution compared to all instruments that have operated so far above 10 keV. The superb hard X-ray imaging capability will be guaranteed by a mirror module of 100 electroformed nickel shells with a multilayer reflecting coating. Here we describe the technological development and solutions adopted for the fabrication of the mirror module, which must guarantee a Half Energy Width (HEW) better than 20 arcsec from 0.5 up to 30 keV, with a goal of 40 arcsec at 60 keV. During phase A, which ended at the end of 2008, we developed three engineering models with two, two and three shells, respectively. The most critical aspects in the development of the Simbol-X mirrors are i) the production of the 100 mandrels with very good surface quality within the timeline of the mission, ii) the replication of shells that must be very thin (a factor of 2 thinner than those of XMM-Newton) and still have very good image quality up to 80 keV, and iii) the development of an integration process that allows us to integrate these very thin mirrors while maintaining their intrinsic good image quality. The Phase A study has shown that we can fabricate the mandrels with the needed quality and that we have developed a valid integration process. The shells produced so far have quite good image quality, e.g. an HEW of roughly 30 arcsec or better at 30 keV, and good effective area. However, we still need to make some improvements to reach the requirements. We will briefly present these results and discuss the possible improvements that we will investigate during phase B.

  3. Quantitative impedance measurements for eddy current model validation

    NASA Astrophysics Data System (ADS)

    Khan, T. A.; Nakagawa, N.

    2000-05-01

    This paper reports on a series of laboratory-based impedance measurements, collected by the use of a quantitatively accurate, mechanically controlled measurement station. The purpose of the measurement is to validate a BEM-based eddy current model against experiment. We have therefore selected two "validation probes," which are both split-D differential probes. Their internal structures and dimensions are extracted from x-ray CT scan data, and are thus known within the measurement tolerance. A series of measurements was carried out using the validation probes and two Ti-6Al-4V block specimens, one containing two 1-mm long fatigue cracks, and the other containing six EDM notches of a range of sizes. A motor-controlled XY scanner performed raster scans over the cracks, with the probe riding on the surface on a spring-loaded mechanism to maintain the lift-off. Both an impedance analyzer and a commercial EC instrument were used in the measurement. The probes were driven in both differential and single-coil modes for the specific purpose of model validation. The differential measurements were done exclusively by the eddyscope, while the single-coil data were taken with both the impedance analyzer and the eddyscope. From the single-coil measurements, we obtained the transfer function to translate the voltage output of the eddyscope into impedance values, and then used it to translate the differential measurement data into impedance results. The presentation will highlight the schematics of the measurement procedure, a representative sample of raw data, an explanation of the post-processing procedure, and a series of 2D flaw impedance results. A noise estimate will also be given, in order to quantify the accuracy of these measurements and for use in probability-of-detection estimation. This work was supported by the NSF Industry/University Cooperative Research Program.

  4. A methodology for collecting valid software engineering data

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Weiss, David M.

    1983-01-01

    An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

  5. Obtaining a Dry Extract from the Mikania laevigata Leaves with Potential for Antiulcer Activity

    PubMed Central

    Pinto, Mariana Viana; Oliveira, Ezequiane Machado; Martins, Jose Luiz Rodrigues; de Paula, Jose Realino; Costa, Elson Alves; da Conceição, Edemilson Cardoso; Bara, Maria Teresa Freitas

    2017-01-01

    Background: Mikania laevigata leaves are commonly used in Brazil as a medicinal plant. Objective: To obtain a hydroalcoholic dried extract by nebulization and evaluate its antiulcerogenic potential. Materials and Methods: Plant material and hydroalcoholic extract were processed and analyzed for their physicochemical characteristics. A method using HPLC was validated to quantify coumarin and o-coumaric acid. Hydroalcoholic extract was spray dried and the powder obtained was characterized in terms of its physicochemical parameters and potential for antiulcerogenic activity. Results: The analytical method proved to be selective, linear, precise, accurate, sensitive, and robust. M. laevigata spray dried extract was obtained using colloidal silicon dioxide as adjuvant and was shown to possess 1.83 ± 0.004% coumarin and 0.80 ± 0.012% o-coumaric acid. It showed significant antiulcer activity in a model of an indomethacin-induced gastric lesion in mice and also produced a gastroprotective effect. Conclusion: This dried extract from M. laevigata could be a promising intermediate phytopharmaceutical product. SUMMARY A standardized dried extract of Mikania laevigata leaves was developed through spray drying, and the production process was monitored via the chemical profile, physicochemical properties and potential for antiulcerogenic activity. Abbreviations used: DE: M. laevigata spray dried extract, HE: hydroalcoholic extract. PMID:28216886

  6. Ares I-X Range Safety Simulation Verification and Analysis Independent Validation and Verification

    NASA Technical Reports Server (NTRS)

    Merry, Carl M.; Tarpley, Ashley F.; Craig, A. Scott; Tartabini, Paul V.; Brewer, Joan D.; Davis, Jerel G.; Dulski, Matthew B.; Gimenez, Adrian; Barron, M. Kyle

    2011-01-01

    NASA's Ares I-X vehicle launched on a suborbital test flight from the Eastern Range in Florida on October 28, 2009. To obtain approval for launch, a range safety final flight data package was generated to meet the data requirements defined in the Air Force Space Command Manual 91-710 Volume 2. The delivery included products such as a nominal trajectory, trajectory envelopes, stage disposal data and footprints, and a malfunction turn analysis. The Air Force's 45th Space Wing uses these products to ensure public and launch area safety. Due to the criticality of these data, an independent validation and verification effort was undertaken to ensure data quality and adherence to requirements. As a result, the product package was delivered with the confidence that independent organizations using separate simulation software generated data to meet the range requirements and yielded consistent results. This document captures the Ares I-X final flight data package verification and validation analysis, including the methodology used to validate and verify simulation inputs, execution, and results, and presents lessons learned during the process.

  7. The Construct of the Learning Organization: Dimensions, Measurement, and Validation

    ERIC Educational Resources Information Center

    Yang, Baiyin; Watkins, Karen E.; Marsick, Victoria J.

    2004-01-01

    This research describes efforts to develop and validate a multidimensional measure of the learning organization. An instrument was developed based on a critical review of both the conceptualization and practice of this construct. Supporting validity evidence for the instrument was obtained from several sources, including best model-data fit among…

  8. The Elegance of Disordered Granular Packings: A Validation of Edwards' Hypothesis

    NASA Technical Reports Server (NTRS)

    Metzger, Philip T.; Donahue, Carly M.

    2004-01-01

    We have found a way to analyze Edwards' density of states for static granular packings in the special case of round, rigid, frictionless grains assuming a constant coordination number. It obtains the most entropic density of single-grain states, which predicts several observables including the distribution of contact forces. We compare these results against empirical data obtained in dynamic simulations of granular packings. The agreement between theory and the empirics is quite good, helping validate the use of statistical mechanics methods in granular physics. The differences between theory and empirics are mainly due to the variable coordination number, and when the empirical data are sorted by that number we obtain several insights that suggest an underlying elegance in the density of states.

  9. Paternity tests in Mexico: Results obtained in 3005 cases.

    PubMed

    García-Aceves, M E; Romero Rentería, O; Díaz-Navarro, X X; Rangel-Villalobos, H

    2018-04-01

    National and international reports regarding paternity testing activity scarcely include information from Mexico and other Latin American countries. Therefore, we report results from the analysis of 3005 paternity cases analyzed during a period of five years in a Mexican paternity testing laboratory. Motherless tests were the most frequent (77.27%), followed by trio cases (20.70%); the remaining 2.04% included different cases of kinship reconstruction. The paternity exclusion rate was 29.58%, higher than but within the range reported by the American Association of Blood Banks (average 24.12%). We detected 65 mutations, most of them one-step mutations (93.8%), the remainder two-step mutations (6.2%); thus, we were able to estimate the paternal mutation rate for 17 different STR loci: 0.0018 (95% CI 0.0005-0.0047). Five triallelic patterns and 12 suspected null alleles were detected during this period; however, re-amplification of these samples with a different Human Identification (HID) kit confirmed the homozygous genotypes, which suggests that most of these exclusions actually are one-step mutations. HID kits with ≥20 STRs detected more exclusions, diminishing the rate of inconclusive results with isolated exclusions (<3 loci), and leading to higher paternity indexes (PI). However, the Powerplex 21 kit (20 STRs) and Powerplex Fusion kit (22 STRs) offered similar PI (p = 0.379) and average numbers of exclusions (PE) (p = 0.339) when a daughter was involved in motherless tests. In brief, besides reporting forensic parameters from paternity tests in Mexico, the results describe improvements for solving motherless paternity tests using HID kits with ≥20 STRs instead of one including 15 STRs. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
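
    A mutation rate with a 95% confidence interval of the kind quoted above is typically obtained by treating the observed mutations as binomial events over the number of allele transfers examined; the sketch below computes an exact (Clopper-Pearson) interval. The denominator is a placeholder chosen only for illustration, not the study's raw count, and the authors' exact interval method is not stated in the abstract.

      from scipy.stats import beta

      def clopper_pearson(k, n, conf=0.95):
          """Exact binomial confidence interval for k events observed in n trials."""
          alpha = 1 - conf
          lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
          upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
          return k / n, (lower, upper)

      # e.g. 65 observed mutations over a purely illustrative 36,000 meioses
      rate, ci = clopper_pearson(65, 36000)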

  10. Feasibility and accuracy of molecular testing in specimens obtained with small biopsy forceps: comparison with the results of surgical specimens.

    PubMed

    Oki, Masahide; Yatabe, Yasushi; Saka, Hideo; Kitagawa, Chiyoe; Kogure, Yoshihito; Ichihara, Shu; Moritani, Suzuko

    2015-01-01

    During bronchoscopy, small biopsy forceps are increasingly used for the diagnosis of peripheral pulmonary lesions. However, it is unclear whether the formalin-fixed paraffin-embedded specimens sampled with small biopsy forceps are suitable for determining genotypes, which have become indispensable for management decisions regarding patients with non-small cell lung cancer. The aim of this study was to evaluate the feasibility and accuracy of molecular testing in specimens obtained with 1.5-mm small biopsy forceps. We examined specimens from 91 patients who were enrolled in our previous 3 studies on the usefulness of thin bronchoscopes, were given a diagnosis of non-small cell lung cancer by bronchoscopy with the 1.5-mm biopsy forceps, and then underwent surgical resection. An experienced pathologist examined paraffin-embedded specimens obtained by bronchoscopic biopsy or surgical resection in a blind fashion for epidermal growth factor receptor (EGFR) mutations, anaplastic lymphoma kinase (ALK) rearrangements and KRAS mutations. Twenty-five (27%), 2 (2%) and 5 (5%) patients had an EGFR mutation, ALK rearrangement and KRAS mutation, respectively, based on the results in surgical specimens. EGFR, ALK and KRAS testing with bronchoscopic specimens was feasible in 82 (90%), 86 (95%) and 83 (91%) patients, respectively. Where molecular testing was feasible, the agreement of EGFR, ALK and KRAS results from bronchoscopic specimens with those from surgical specimens was 98%, 100% and 98%, respectively. The results of molecular testing in the formalin-fixed paraffin-embedded specimens obtained with the small forceps, in which the genotype could be evaluated, correlated well with those in surgically resected specimens.

  11. AMSR Validation Program

    NASA Astrophysics Data System (ADS)

    Lobl, E. S.

    2003-12-01

    AMSR and AMSR-E are passive microwave radiometers built by NASDA in Japan. AMSR flies on ADEOS II, launched December 14, 2002, and AMSR-E flies on NASA's Aqua satellite, launched May 4, 2002. The Science teams in both countries have developed algorithms to retrieve different atmospheric parameters from the data obtained by these radiometers. The US Science team has developed a Validation plan that involved several campaigns. In fact, most of these campaigns have taken place this year, 2003, nicknamed the "Golden Year" for AMSR Validation. The first campaign started in January 2003 with the Extra-tropical precipitation campaign, followed by IOP3 of the Cold Lands Processes Experiment (CLPX) in Colorado. After the change-out of some of the instruments, the Validation program continued with the Arctic Sea Ice campaign based in Alaska, followed by CLPX IOP 4, back in Colorado. Soil Moisture EXperiment 03 (SMEX03) started in late June in Alabama and Georgia, and was then completed in Oklahoma in mid-July. The last campaign in this series is AMSR Antarctic Sea Ice (AASI)/SMEX in Brazil. The major goals of each campaign, and very preliminary data, will be shown. Most of these campaigns were in collaboration with the Japanese AMSR scientists.

  12. The SCALE Verified, Archived Library of Inputs and Data - VALID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional

  13. First results obtained within the European 'LAMA' programme (Large Active Mirrors in Aluminium)

    NASA Astrophysics Data System (ADS)

    Rozelot, J.-P.

    1993-11-01

    To investigate the feasibility of large-size aluminum mirrors, studies have been undertaken in cooperation with the European Southern Observatory (ESO), in the framework of a European program. The first phase, which has just ended, addressed the following items: (1) tests to select the best aluminum alloy, (2) aluminum welding, homogeneity and stability, (3) aluminum high-precision machining, (4) nickel coating, (5) polishing of the nickel layer, (6) active optics. Furthermore, tests have been conducted to demonstrate that the quality of the mirrors is not altered at various temperatures and after a large number of aluminizing and cleaning cycles (corresponding to about 50 years' life). The mirror shape (whose specifications are fully compliant with those of the Very Large Telescope (VLT), as the program is conducted in cooperation with ESO) was computed under several causes of deformation, evidencing gravity as the predominant effect and very low distortions, as the high thermal conductivity limits the thermal transverse gradient to 0.025 °C. Results show that it is quite possible to obtain high optical quality mirrors, mainly due to recent progress both in metallurgical processes (high-precision machining, 7 microns rms) and in active optics, which permit correction of residual aberrations of the surface. Such an alternative to classical glass mirrors stands as a safe, economical solution that saves manufacturing time for monolithic or segmented mirrors in innovative telescopes (e.g., a lunar interferometric network).

  14. Validation Results for LEWICE 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Rutkowski, Adam

    1999-01-01

    A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases due to its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive amount of effort undertaken to compare the results in a quantified manner against the database of ice shapes which have been generated in the NASA Lewis Icing Research Tunnel (IRT). The results of the shape comparisons are analyzed to determine the range of meteorological conditions under which LEWICE 2.0 is within the experimental repeatability. This comparison shows that the average variation of LEWICE 2.0 from the experimental data is 7.2%, while the overall variability of the experimental data is 2.5%.

  15. Out-of-plane buckling of pantographic fabrics in displacement-controlled shear tests: experimental results and model validation

    NASA Astrophysics Data System (ADS)

    Barchiesi, Emilio; Ganzosch, Gregor; Liebold, Christian; Placidi, Luca; Grygoruk, Roman; Müller, Wolfgang H.

    2018-01-01

    Due to the latest advancements in 3D printing technology and rapid prototyping techniques, the production of materials with complex geometries has become more affordable than ever. Pantographic structures, because of their attractive features, both in dynamics and statics and both in elastic and inelastic deformation regimes, deserve to be thoroughly investigated with experimental and theoretical tools. Herein, experimental results relative to displacement-controlled large deformation shear loading tests of pantographic structures are reported. In particular, five differently sized samples are analyzed up to first rupture. Results show that the deformation behavior is strongly nonlinear, and the structures are capable of undergoing large elastic deformations without reaching complete failure. Finally, a cutting edge model is validated by means of these experimental results.

  16. Breast cancer: determining the genetic profile from ultrasound-guided percutaneous biopsy specimens obtained during the diagnostic workups.

    PubMed

    López Ruiz, J A; Zabalza Estévez, I; Mieza Arana, J A

    2016-01-01

    To evaluate the possibility of determining the genetic profile of primary malignant tumors of the breast from specimens obtained by ultrasound-guided percutaneous biopsies during the diagnostic imaging workup. This is a retrospective study in 13 consecutive patients diagnosed with invasive breast cancer by B-mode ultrasound-guided 12 G core needle biopsy. After clinical indication, the pathologist decided whether the paraffin block specimens seemed suitable (on the basis of tumor size, validity of the sample, and percentage of tumor cells) before sending them for genetic analysis with the MammaPrint® platform. The size of the tumors on ultrasound ranged from 0.6cm to 5cm. In 11 patients the preserved specimen was considered valid and suitable for use in determining the genetic profile. In 1 patient (with a 1cm tumor) the pathologist decided that it was necessary to repeat the core biopsy to obtain additional samples. In 1 patient (with a 5cm tumor) the specimen was not considered valid by the genetic laboratory. The percentage of tumor cells in the samples ranged from 60% to 70%. In 11/13 cases (84.62%) it was possible to do the genetic analysis on the previously diagnosed samples. In most cases, regardless of tumor size, it is possible to obtain the genetic profile from tissue specimens obtained with ultrasound-guided 12 G core biopsy preserved in paraffin blocks. Copyright © 2015 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  17. Rocket-Based Combined Cycle Engine Technology Development: Inlet CFD Validation and Application

    NASA Technical Reports Server (NTRS)

    DeBonis, J. R.; Yungster, S.

    1996-01-01

    A CFD methodology has been developed for inlet analyses of Rocket-Based Combined Cycle (RBCC) Engines. A full Navier-Stokes analysis code, NPARC, was used in conjunction with pre- and post-processing tools to obtain a complete description of the flow field and integrated inlet performance. This methodology was developed and validated using results from a subscale test of the inlet to a RBCC 'Strut-Jet' engine performed in the NASA Lewis 1 x 1 ft. supersonic wind tunnel. Results obtained from this study include analyses at flight Mach numbers of 5 and 6 for super-critical operating conditions. These results showed excellent agreement with experimental data. The analysis tools were also used to obtain pre-test performance and operability predictions for the RBCC demonstrator engine planned for testing in the NASA Lewis Hypersonic Test Facility. This analysis calculated the baseline fuel-off internal force of the engine which is needed to determine the net thrust with fuel on.

  18. Torso-Tank Validation of High-Resolution Electrogastrography (EGG): Forward Modelling, Methodology and Results.

    PubMed

    Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng

    2018-04-27

    Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers) in addition to normal test data. Each case was recorded and compared to the previously-presented simulated results. Qualitative and quantitative analyses were performed to define the accuracy showing [Formula: see text] 1.8% difference, [Formula: see text] 0.99 correlation, and [Formula: see text] 0.04 normalised RMS error between experimental and simulated findings. These results reaffirm previous findings and these methods in unison therefore present a promising morphological-based methodology for advancing the understanding and clinical applications of EGG.
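
    The agreement measures reported here (percentage difference, correlation and normalised RMS error between experimental and simulated potentials) can be computed along the following lines; the exact normalisations used by the authors may differ, so this is only a generic sketch.

      import numpy as np

      def compare_signals(measured, simulated):
          """Percentage difference, Pearson correlation and range-normalised RMS error."""
          m = np.asarray(measured, float)
          s = np.asarray(simulated, float)
          diff_pct = 100 * np.abs(m - s).mean() / np.abs(m).mean()
          corr = np.corrcoef(m, s)[0, 1]
          nrmse = np.sqrt(((m - s) ** 2).mean()) / (m.max() - m.min())
          return diff_pct, corr, nrmse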

  19. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods, as a way of effecting a greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  20. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice? Does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
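
    The derivation of Vn and its distribution is given in the paper. The resampling skeleton underneath it (refit the random-effects summary without study i, then score the left-out study against its prediction) looks roughly like the sketch below, using a DerSimonian-Laird estimate of the between-study variance; the standardisation shown and the absence of meta-regression covariates are simplifications relative to the paper.

      import numpy as np

      def loo_prediction_errors(effects, variances):
          """Leave-one-out standardised prediction errors for a random-effects meta-analysis."""
          effects = np.asarray(effects, float)
          variances = np.asarray(variances, float)
          z = []
          for i in range(len(effects)):
              keep = np.arange(len(effects)) != i
              y, v = effects[keep], variances[keep]
              w = 1 / v                                   # fixed-effect weights
              mu_fixed = (w * y).sum() / w.sum()
              q = (w * (y - mu_fixed) ** 2).sum()
              tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
              w_re = 1 / (v + tau2)                       # random-effects weights
              mu = (w_re * y).sum() / w_re.sum()
              # prediction variance for the left-out study: its own variance + tau^2 + var(mu)
              pred_var = variances[i] + tau2 + 1 / w_re.sum()
              z.append((effects[i] - mu) / np.sqrt(pred_var))
          return np.array(z)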

  1. Stroke Impact Scale 3.0: Reliability and Validity Evaluation of the Korean Version

    PubMed Central

    2017-01-01

    Objective To establish the reliability and validity of the Korean version of the Stroke Impact Scale (K-SIS) 3.0. Methods A total of 70 post-stroke patients were enrolled. All subjects were evaluated for general characteristics, Mini-Mental State Examination (MMSE), the National Institutes of Health Stroke Scale (NIHSS), Modified Barthel Index, and Hospital Anxiety and Depression Scale (HADS). The SF-36 and K-SIS 3.0 assessed their health-related quality of life. Statistical analysis after evaluation determined the reliability and validity of the K-SIS 3.0. Results A total of 70 patients (mean age, 54.97 years) participated in this study. Internal consistency of the SIS 3.0 (Cronbach's alpha) was good for all domains, with coefficients above the 0.70 threshold. Test-retest reliability of the SIS 3.0 was assessed by correlating (Spearman's rho) the same domain scores obtained on the first and second assessments; results were above 0.5, with the exception of social participation and mobility. Concurrent validity of the K-SIS 3.0 was assessed using the SF-36 and other scales with the same or similar domains. Each domain of the K-SIS 3.0 had a positive correlation with the corresponding or similar domain of the SF-36 and the other scales (HADS, MMSE, and NIHSS). Conclusion The newly developed K-SIS 3.0 showed high internal consistency and test-retest reliability, together with high concurrent validity with the original and various other scales, for patients with stroke. The K-SIS 3.0 can therefore be used for stroke patients to assess their health-related quality of life and treatment efficacy. PMID:28758075
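
    The two reliability figures used here, internal consistency (Cronbach's alpha) per domain and test-retest correlation (Spearman's rho) between administrations, follow standard formulas; a minimal sketch is shown below and is not specific to the K-SIS data.

      import numpy as np
      from scipy.stats import spearmanr

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
          items = np.asarray(items, float)
          k = items.shape[1]
          item_var_sum = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var_sum / total_var)

      def test_retest(scores_t1, scores_t2):
          """Spearman correlation between first and second administrations of a domain."""
          return spearmanr(scores_t1, scores_t2).correlation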

  2. PSI-Center Validation Studies

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2014-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring the application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
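
    Biorthogonal decomposition of a space-time data matrix is, in practice, a singular value decomposition: the left singular vectors are the spatial structures, the right singular vectors the temporal ones, and the singular values their weights. A hedged sketch of a weighted mode-by-mode comparison between experiment and simulation is given below; the actual PSI-Center metrics may be defined differently, and mode ordering and sign must be handled with care.

      import numpy as np

      def bod(data):
          """Biorthogonal decomposition of a (space x time) data matrix via SVD.
          Returns spatial modes ('topos'), singular values (weights) and temporal modes ('chronos')."""
          topos, weights, chronos = np.linalg.svd(data, full_matrices=False)
          return topos, weights, chronos

      def mode_similarity(exp_data, sim_data, n_modes=4):
          """Weighted cosine similarity of the leading spatial modes of experiment and simulation.
          Assumes both data sets are sampled on the same spatial grid."""
          t_exp, w_exp, _ = bod(exp_data)
          t_sim, _, _ = bod(sim_data)
          sims = [abs(t_exp[:, k] @ t_sim[:, k]) for k in range(n_modes)]  # modes are unit vectors
          return np.average(sims, weights=w_exp[:n_modes])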

  3. Investigating Validity of Math 105 as Prerequisite to Math 201 among Undergraduate Students, Nigeria

    ERIC Educational Resources Information Center

    Zakariya, Yusuf F.

    2016-01-01

    In this study, the author examined the validity of MATH 105 as a prerequisite to MATH 201. The data for this study were extracted directly from the university's examination results log. Descriptive statistics in the form of correlations and linear regressions were used to analyze the obtained data. Three research questions were formulated and…

  4. Validating the cross-cultural factor structure and invariance property of the Insomnia Severity Index: evidence based on ordinal EFA and CFA.

    PubMed

    Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M

    2015-05-01

    The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries. We tried to identify the most appropriate factor model for the ISI and further examined the measurement invariance property of the ISI across samples from different countries. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to obtain the appropriate factor model for the ISI. After that, we applied a series of confirmatory factor analyses (CFAs), a special case of the structural equation model (SEM) concerning the parameters of the measurement model, to the data collected in Canada and Hong Kong. The purposes of these CFAs were to cross-validate the result obtained from the EFA and to further examine the cross-cultural measurement invariance of the ISI. The three-factor model outperforms other models in terms of global fit indices in Taiwan's population. Its external validity is also supported by the confirmatory factor analyses. Furthermore, the measurement invariance analyses show that the strong invariance property between the samples from different cultures holds, providing evidence that ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable in different populations. More importantly, its invariance property across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Content Validity of National Post Marriage Educational Program Using Mixed Methods

    PubMed Central

    MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali

    2015-01-01

    Background: Although the validity of the content of a program is mostly assessed with qualitative methods, this study used both qualitative and quantitative methods to validate the content of the post marriage training program provided for newly married couples. Content validation is a preliminary step toward obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, with three expert panels. Altogether 24 expert panelists were involved in the qualitative and quantitative panels: 6 in the first (item development) panel; 12 in the item reduction panel, 4 of whom were also members of the first panel; and 10 executive experts in the last panel, organized to evaluate the psychometric properties of CVR and CVI and the face validity of 57 educational objectives. Results: The raw content of the post marriage program had been written by professional experts of the Ministry of Health; using the qualitative expert panel, the content was further developed by generating 3 topics and refining one topic and its respective content. In the second panel, a total of six further objectives were deleted, three for falling below the agreement cut-off point and three by experts' consensus. In the quantitative assessment, the validity of all items was above 0.8 and their content validity indices (0.8–1) were fully appropriate. Conclusion: This study provides good evidence for the validation and accreditation of the national post marriage program planned for newly married couples in health centers of the country in the near future. PMID:26056672
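
    For reference, the content validity ratio (Lawshe's CVR) and the item-level content validity index (I-CVI) mentioned above are simple ratios over the panel's ratings; the sketch below shows the standard formulas with illustrative numbers, not the study's data.

      def content_validity_ratio(n_essential, n_experts):
          """Lawshe's CVR: (n_e - N/2) / (N/2)."""
          return (n_essential - n_experts / 2) / (n_experts / 2)

      def item_content_validity_index(n_relevant, n_experts):
          """I-CVI: proportion of experts rating the item relevant (3 or 4 on a 4-point scale)."""
          return n_relevant / n_experts

      # e.g. 9 of 10 executive experts rate an objective essential and relevant
      cvr = content_validity_ratio(9, 10)        # 0.8
      cvi = item_content_validity_index(9, 10)   # 0.9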

  6. Validation of Pooled Whole-Genome Re-Sequencing in Arabidopsis lyrata.

    PubMed

    Fracassetti, Marco; Griffin, Philippa C; Willi, Yvonne

    2015-01-01

    Sequencing pooled DNA of multiple individuals from a population instead of sequencing individuals separately has become popular due to its cost-effectiveness and simple wet-lab protocol, although some criticism of this approach remains. Here we validated a protocol for pooled whole-genome re-sequencing (Pool-seq) of Arabidopsis lyrata libraries prepared with low amounts of DNA (1.6 ng per individual). The validation was based on comparing single nucleotide polymorphism (SNP) frequencies obtained by pooling with those obtained by individual-based Genotyping By Sequencing (GBS). Furthermore, we investigated the effect of sample number, sequencing depth per individual and variant caller on population SNP frequency estimates. For Pool-seq data, we compared frequency estimates from two SNP callers, VarScan and Snape; the former employs a frequentist SNP calling approach while the latter uses a Bayesian approach. Results revealed concordance correlation coefficients well above 0.8, confirming that Pool-seq is a valid method for acquiring population-level SNP frequency data. Higher accuracy was achieved by pooling more samples (25 compared to 14) and working with higher sequencing depth (4.1× per individual compared to 1.4× per individual), which increased the concordance correlation coefficient to 0.955. The Bayesian-based SNP caller produced somewhat higher concordance correlation coefficients, particularly at low sequencing depth. We recommend pooling at least 25 individuals combined with sequencing at a depth of 100× to produce satisfactory frequency estimates for common SNPs (minor allele frequency above 0.05).
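
    The concordance correlation coefficient used above to compare Pool-seq and GBS allele frequencies is Lin's statistic; a generic implementation is sketched below (the authors' exact software and filtering steps are not reproduced).

      import numpy as np

      def concordance_correlation(x, y):
          """Lin's concordance correlation coefficient between two frequency vectors."""
          x = np.asarray(x, float)
          y = np.asarray(y, float)
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          sxy = ((x - mx) * (y - my)).mean()
          return 2 * sxy / (vx + vy + (mx - my) ** 2)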

  7. Initial validation and results of the Symptoms in Persons At Risk of Rheumatoid Arthritis (SPARRA) questionnaire: a EULAR project

    PubMed Central

    van Beers-Tas, Marian H; ter Wee, Marieke M; van Tuyl, Lilian H; Maat, Bertha; Hoogland, Wijnanda; Hensvold, Aase H; Catrina, Anca I; Mosor, Erika; Finckh, Axel; Courvoisier, Delphine S; Filer, Andrew; Sahbudin, Ilfita; Stack, Rebecca J; Raza, Karim; van Schaardenburg, Dirkjan

    2018-01-01

    Objectives To describe the development and assess the psychometric properties of the novel ‘Symptoms in Persons At Risk of Rheumatoid Arthritis’ (SPARRA) questionnaire in individuals at risk of rheumatoid arthritis (RA) and to quantify their symptoms. Methods The questionnaire items were derived from a qualitative study in patients with seropositive arthralgia. The questionnaire was administered to 219 individuals at risk of RA on the basis of symptoms or autoantibody positivity: 74% rheumatoid factor and/or anticitrullinated protein antibodies positive, 26% seronegative. Validity, reliability and responsiveness were assessed. Eighteen first degree relatives (FDR) of patients with RA were used for comparison. Results Face and content validity were high. The test-retest showed good agreement and reliability (1 week and 6 months). Overall, construct validity was low to moderate, with higher values for concurrent validity, suggesting that some questions reflect symptom content not captured with regular Visual Analogue Scale pain/well-being. Responsiveness was low (small subgroup). Finally, the burden of symptoms in both seronegative and seropositive at risk individuals was high, with pain, stiffness and fatigue being the most common ones with a major impact on daily functioning. The FDR cohort (mostly healthy individuals) showed a lower burden of symptoms; however, the distribution of symptoms was similar. Conclusions The SPARRA questionnaire has good psychometric properties and can add information to currently available clinical measures in individuals at risk of RA. The studied group had a high burden and impact of symptoms. Future studies should evaluate whether SPARRA data can improve the prediction of RA in at risk individuals.

  8. Biofeedback in Partial Weight Bearing: Validity of 3 Different Devices.

    PubMed

    van Lieshout, Remko; Stukstette, Mirelle J; de Bie, Rob A; Vanwanseele, Benedicte; Pisters, Martijn F

    2016-11-01

    Study Design: Controlled laboratory study to assess criterion-related validity, with a cross-sectional within-subject design. Background: Patients with orthopaedic conditions have difficulties complying with partial weight-bearing instructions. Technological advances have resulted in biofeedback devices that offer real-time feedback. However, the accuracy of these devices is mostly unknown. Inaccurate feedback can result in incorrect lower-limb loading and may lead to delayed healing. Objectives: To investigate the validity of peak force measurements obtained using 3 different biofeedback devices under varying partial weight-bearing categories. Methods: Validity of 3 biofeedback devices (OpenGo science, SmartStep, and SensiStep) was assessed. Healthy participants were instructed to walk at a self-selected speed with crutches under 3 different weight-bearing conditions, categorized as a percentage range of body weight: 1% to 20%, greater than 20% to 50%, and greater than 50% to 75%. Peak force data from the biofeedback devices were compared with the peak vertical ground reaction force measured with a force plate. Criterion validity was estimated using simple and regression-based Bland-Altman 95% limits of agreement and weighted kappas. Results: Fifty-five healthy adults (58% male) participated. Agreement with the gold standard was substantial for the SmartStep, moderate for OpenGo science, and slight for SensiStep (weighted κ = 0.76, 0.58, and 0.19, respectively). For the 1% to 20% and greater than 20% to 50% weight-bearing categories, both the OpenGo science and SmartStep had acceptable limits of agreement. For the weight-bearing category greater than 50% to 75%, none of the devices had acceptable agreement. Conclusion: The OpenGo science and SmartStep provided valid feedback in the lower weight-bearing categories, and the SensiStep showed poor validity of feedback in all weight-bearing categories. J Orthop Sports Phys Ther 2016;46(11):-1. Epub 12 Oct 2016. doi:10
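
    Of the agreement statistics named above, the simple Bland-Altman 95% limits of agreement are the most compact to illustrate; a minimal sketch follows (the regression-based limits and the weighted kappa used in the study are not reproduced).

      import numpy as np

      def bland_altman_limits(device, reference):
          """Simple Bland-Altman bias and 95% limits of agreement between two methods."""
          diff = np.asarray(device, float) - np.asarray(reference, float)
          bias, sd = diff.mean(), diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)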

  9. Standardization of a fluconazole bioassay and correlation of results with those obtained by high-pressure liquid chromatography.

    PubMed Central

    Rex, J H; Hanson, L H; Amantea, M A; Stevens, D A; Bennett, J E

    1991-01-01

    An improved bioassay for fluconazole was developed. This assay is sensitive in the clinically relevant range (2 to 40 micrograms/ml) and analyzes plasma, serum, and cerebrospinal fluid specimens; bioassay results correlate with results obtained by high-pressure liquid chromatography (HPLC). Bioassay and HPLC analyses of spiked plasma, serum, and cerebrospinal fluid samples (run as unknowns) gave good agreement with expected values. Analysis of specimens from patients gave equivalent results by both HPLC and bioassay. HPLC had a lower within-run coefficient of variation (less than 2.5% for HPLC versus less than 11% for bioassay) and a lower between-run coefficient of variation (less than 5% versus less than 12% for bioassay) and was more sensitive (lower limit of detection, 0.1 micrograms/ml [versus 2 micrograms/ml for bioassay]). The bioassay is, however, sufficiently accurate and sensitive for clinical specimens, and its relative simplicity, low sample volume requirement, and low equipment cost should make it the technique of choice for analysis of routine clinical specimens. PMID:1854166

  10. WHO Study on the reliability and validity of the alcohol and drug use disorder instruments: overview of methods and results.

    PubMed

    Ustün, B; Compton, W; Mager, D; Babor, T; Baiyewu, O; Chatterji, S; Cottler, L; Göğüş, A; Mavreas, V; Peters, L; Pull, C; Saunders, J; Smeets, R; Stipec, M R; Vrasti, R; Hasin, D; Room, R; Van den Brink, W; Regier, D; Blaine, J; Grant, B F; Sartorius, N

    1997-09-25

    The WHO Study on the reliability and validity of the alcohol and drug use disorder instruments is an international study which has taken place in centres in ten countries, aiming to test the reliability and validity of three diagnostic instruments for alcohol and drug use disorders: the Composite International Diagnostic Interview (CIDI), the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) and a special version of the Alcohol Use Disorder and Associated Disabilities Interview Schedule-alcohol/drug-revised (AUDADIS-ADR). The purpose of the reliability and validity (R&V) study is to further develop the alcohol and drug sections of these instruments so that a range of substance-related diagnoses can be made in a systematic, consistent, and reliable way. The study focuses on new criteria proposed in the tenth revision of the International Classification of Diseases (ICD-10) and the fourth revision of the diagnostic and statistical manual of mental disorders (DSM-IV) for dependence, harmful use and abuse categories for alcohol and psychoactive substance use disorders. A systematic study including a scientifically rigorous measure of reliability (i.e. 1 week test-retest reliability) and validity (i.e. comparison between clinical and non-clinical measures) has been undertaken. Results have yielded useful information on the reliability and validity of these instruments at the diagnosis, criteria and question level. Overall, the diagnostic concordance coefficients (kappa) were very good for dependence disorders (0.7-0.9), but were somewhat lower for the abuse and harmful use categories. The comparisons among instruments and independent clinical evaluations and debriefing interviews gave important information about possible sources of unreliability, and provided useful clues on the applicability and consistency of nosological concepts across cultures.

  11. Validation of the Social Appearance Anxiety Scale: factor, convergent, and divergent validity.

    PubMed

    Levinson, Cheri A; Rodebaugh, Thomas L

    2011-09-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor, convergent, and divergent validity of the SAAS in two samples of undergraduates. In Study 1 (N = 323), the authors tested the factor structure, convergent, and divergent validity of the SAAS with measures of the Big Five personality traits, negative affect, fear of negative evaluation, and social interaction anxiety. In Study 2 (N = 118), participants completed a body evaluation that included measurements of height, weight, and body fat content. The SAAS exhibited excellent convergent and divergent validity with self-report measures (i.e., self-esteem, trait anxiety, ethnic identity, and sympathy), predicted state anxiety experienced during the body evaluation, and predicted body fat content. In both studies, results confirmed a single-factor structure as the best fit to the data. These results lend additional support for the use of the SAAS as a valid measure of social appearance anxiety.

  12. The Chinese version of the Outcome Expectations for Exercise scale: validation study.

    PubMed

    Lee, Ling-Ling; Chiu, Yu-Yun; Ho, Chin-Chih; Wu, Shu-Chen; Watson, Roger

    2011-06-01

    The English nine-item Outcome Expectations for Exercise (OEE) scale has been tested and found to be reliable and valid for use in various settings, particularly among older people, with good internal consistency. Data on the use of the OEE scale among older Chinese people living in the community, and on how cultural differences might affect its administration, are limited. The aim was to test the validity and reliability of the Chinese version of the Outcome Expectations for Exercise scale among older people. A cross-sectional validation study was designed to test the Chinese version of the OEE scale (OEE-C). Reliability was examined by testing both the internal consistency of the overall scale and the squared multiple correlation coefficient for the single-item measure. The validity of the scale was tested on the basis of both a traditional psychometric test and a confirmatory factor analysis using structural equation modelling. The Mokken Scaling Procedure (MSP) was used to investigate whether there were any hierarchical, cumulative sets of items in the measure. The OEE-C scale was tested in a group of older people in Taiwan (n=108, mean age=77.1). There was acceptable internal consistency (alpha=.85) and model fit. Evidence of the validity of the measure was demonstrated by the tests for criterion-related validity and construct validity. There was a statistically significant correlation between exercise outcome expectations and exercise self-efficacy (r=.34, p<.01). The Mokken Scaling Procedure analysis retained all nine items of the scale, and the resulting scale was reliable and statistically significant (p=.0008). The results obtained in the present study provided acceptable levels of reliability and validity evidence for the Chinese Outcome Expectations for Exercise scale when used with older people in Taiwan. Future testing of the OEE-C scale needs to be carried out.

  13. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    PubMed

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, 14 of them on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Decision limits were also calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that, for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
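    To make the RSD(r)/RSD(R) calculation concrete, the sketch below combines variance components (such as those a REML fit would return) into repeatability and reproducibility relative standard deviations. The component values and mean are invented, not taken from the study.

```python
# Hypothetical sketch: combining variance components (e.g. from a REML fit)
# into repeatability and in-house reproducibility relative standard deviations.
# The numbers below are invented for illustration only.
import math

mean_gm_percent = 1.0          # mean measured GM content (%)
var_pcr = 0.004                # within-run PCR variance component
var_dna_isolation = 0.006      # variance component for DNA isolation
var_pcr_day = 0.005            # variance component for PCR day

# Repeatability: variation under identical conditions (PCR step only)
rsd_r = math.sqrt(var_pcr) / mean_gm_percent * 100

# In-house reproducibility: all within-laboratory components combined
rsd_R = math.sqrt(var_pcr + var_dna_isolation + var_pcr_day) / mean_gm_percent * 100

print(f"RSD(r) = {rsd_r:.1f}%, RSD(R) = {rsd_R:.1f}%")
```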

  14. Brazilian Portuguese Validated Version of the Cardiac Anxiety Questionnaire

    PubMed Central

    Sardinha, Aline; Nardi, Antonio Egidio; de Araújo, Claudio Gil Soares; Ferreira, Maria Cristina; Eifert, Georg H.

    2013-01-01

    Background Cardiac Anxiety (CA) is the fear of cardiac sensations, characterized by recurrent anxiety symptoms, in patients with or without cardiovascular disease. The Cardiac Anxiety Questionnaire (CAQ) is a tool to assess CA that had already been adapted to Portuguese but not yet validated. Objective This paper presents the three phases of the validation studies of the Brazilian CAQ. Methods To extract the factor structure and assess the reliability of the CAQ (phase 1), 98 patients with coronary artery disease were recruited. The aim of phase 2 was to explore the convergent and divergent validity. Fifty-six patients completed the CAQ, along with the Body Sensations Questionnaire (BSQ) and the Social Phobia Inventory (SPIN). To determine the discriminative validity (phase 3), we compared the CAQ scores of two subgroups formed with patients from phase 1 (n = 98), according to the diagnoses of panic disorder and agoraphobia obtained with the MINI - Mini International Neuropsychiatric Interview. Results A 2-factor solution was the most interpretable (46.4% of the variance). The subscales were named "Fear and Hypervigilance" (n = 9; alpha = 0.88) and "Avoidance" (n = 5; alpha = 0.82). A significant correlation was found between factor 1 and the BSQ total score (p < 0.01), but not with factor 2. SPIN factors showed significant correlations with the CAQ subscales (p < 0.01). In phase 3, "cardiac with panic" patients scored significantly higher on CAQ factor 1 (t = -3.42; p < 0.01, CI = -1.02 to -0.27), and higher, but not significantly so, on factor 2 (t = -1.98; p = 0.51, CI = -0.87 to 0.00). Conclusions These results provide a definitive Brazilian validated version of the CAQ, adequate for clinical and research settings. PMID:24145391

  15. Three-dimensional computational fluid dynamics modelling and experimental validation of the Jülich Mark-F solid oxide fuel cell stack

    NASA Astrophysics Data System (ADS)

    Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.

    2018-01-01

    This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.

  16. [Valuating public health in some zoos in Colombia. Phase 1: designing and validating instruments].

    PubMed

    Agudelo-Suárez, Angela N; Villamil-Jiménez, Luis C

    2009-10-01

    The aim was to design and validate instruments for identifying public health problems in some zoological parks in Colombia, thereby allowing them to be evaluated. Four instruments were designed and validated with the participation of five zoos. The instruments were validated regarding appearance, content, sensitivity to change and reliability, and their usefulness was determined. An evaluation scale was created which assigned a maximum of 400 points, with the following evaluation intervals: 350-400 points meant good public health management, 100-349 points regular management and 0-99 points deficient management. The instruments were applied to the five zoos as part of the validation, forming a baseline for future evaluation of public health in them. Four valid and useful instruments were obtained for evaluating public health in zoos in Colombia. The five zoos presented regular public health management. The baseline obtained when validating the instruments led to identifying strengths and weaknesses regarding public health management in the zoos. The instruments evaluated public health management both generally and specifically; they led to diagnosing, identifying, quantifying and scoring zoos in Colombia in terms of public health. The baseline provided a starting point for making comparisons and enabling future follow-up of public health in Colombian zoos.

  17. Validation of the Technological Process of the Preparation "Milk by Vidal".

    PubMed

    Savchenko, L P; Mishchenko, V A; Georgiyants, V A

    2017-01-01

    Validation was performed on the technological process of the compounded preparation "Milk by Vidal" in accordance with the requirements of the regulatory framework of Ukraine. Critical stages of formulation which can affect the quality of the finished preparation were considered during the research. The obtained results indicated that the quality of the finished preparation met the requirements of the State Pharmacopoeia of Ukraine. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  18. Comparison of Leishmania typing results obtained from 16 European clinical laboratories in 2014

    PubMed Central

    Van der Auwera, Gert; Bart, Aldert; Chicharro, Carmen; Cortes, Sofia; Davidsson, Leigh; Di Muccio, Trentina; Dujardin, Jean-Claude; Felger, Ingrid; Paglia, Maria Grazia; Grimm, Felix; Harms, Gundel; Jaffe, Charles L.; Manser, Monika; Ravel, Christophe; Robert-Gangneux, Florence; Roelfsema, Jeroen; Töz, Seray; Verweij, Jaco J.; Chiodini, Peter L.

    2016-01-01

    Leishmaniasis is endemic in southern Europe, and in other European countries cases are diagnosed in travellers who have visited affected areas both within the continent and beyond. Prompt and accurate diagnosis poses a challenge in clinical practice in Europe. Different methods exist for identification of the infecting Leishmania species. Sixteen clinical laboratories in 10 European countries, plus Israel and Turkey, conducted a study to assess their genotyping performance. DNA from 21 promastigote cultures of 13 species was analysed blindly by the routinely used typing method. Five different molecular targets were used, which were analysed with PCR-based methods. Different levels of identification were achieved, and either the Leishmania subgenus, species complex, or actual species was reported. The overall error rate of strains placed in the wrong complex or species was 8.5%. Various reasons for incorrect typing were identified. The study shows there is considerable room for improvement and standardisation of Leishmania typing. The use of well-validated standard operating procedures is recommended, covering testing, interpretation, and reporting guidelines. Application of the internal transcribed spacer 1 of the rDNA array should be restricted to Old World samples, while the heat-shock protein 70 gene and the mini-exon can be applied globally. PMID:27983510

  19. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628
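    The sensitivity and specificity figures quoted above come from the usual 2x2 comparison of the algorithm against the reference standard. The sketch below shows that calculation with invented counts (they are not the study's data).

```python
# Hypothetical sketch: sensitivity and specificity of a registry case
# algorithm against re-reviewed pathology as the reference standard.
# Counts are invented for illustration only.
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int):
    """Return (sensitivity, specificity) from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # algorithm-positive among true cases
    specificity = tn / (tn + fp)   # algorithm-negative among non-cases
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=18, fp=3, fn=2, tn=27)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```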

  20. Validity of "Hi_Science" as instructional media based-android refer to experiential learning model

    NASA Astrophysics Data System (ADS)

    Qamariah, Jumadi, Senam, Wilujeng, Insih

    2017-08-01

    Hi_Science is an Android-based instructional medium for learning science on the topics of environmental pollution and global warming. This study aims (a) to show the display of Hi_Science as it will be applied in junior high school, and (b) to describe the validity of Hi_Science. Hi_Science was created by combining an innovative learning model with current technology: the medium is Android-based and built around the experiential learning model. Hi_Science adapted the student worksheet by Taufiq (2015), which had been rated as very good by two expert lecturers and two science teachers. This worksheet was refined and redeveloped as an Android application so that students can use it to learn science not only in the classroom but also at home. The worksheet-turned-application therefore had to be validated again. Hi_Science was validated by two experts, with the assessment covering material aspects and media aspects; data were collected with a media assessment instrument. The assessment of the material aspects obtained an average value of 4.72 with a percentage of agreement of 96.47%, meaning that Hi_Science is in the excellent (very valid) category on the material aspects. The assessment of the media aspects obtained an average value of 4.53 with a percentage of agreement of 98.70%, meaning that Hi_Science is in the excellent (very valid) category on the media aspects. It was concluded that Hi_Science can be applied as an instructional medium in junior high school.

  1. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  2. Validation of Rehabilitation Counseling Accreditation and Certification Knowledge Areas: Methodology and Initial Results.

    ERIC Educational Resources Information Center

    Szymanski, Edna Mora; And Others

    1993-01-01

    Conducted ongoing study to validate and update knowledge standards for rehabilitation counseling accreditation and certification, using descriptive, ex post facto, and time-series designs and three sampling frames. Findings from 1,025 counselors who renewed their certification in 1991 revealed that 52 of 55 knowledge standards were rated as at…

  3. Initial Retrieval Validation from the Joint Airborne IASI Validation Experiment (JAIVEx)

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Smith, WIlliam L.; Larar, Allen M.; Taylor, Jonathan P.; Revercomb, Henry E.; Mango, Stephen A.; Schluessel, Peter; Calbet, Xavier

    2007-01-01

    The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, but also included a strong component focusing on validation of the Atmospheric InfraRed Sounder (AIRS) aboard the AQUA satellite. The cross-validation of IASI and AIRS is important for the joint use of their data in the global Numerical Weather Prediction process. Initial inter-comparisons of geophysical products have been conducted from different aspects, such as using different measurements from airborne ultraspectral Fourier transform spectrometers (specifically, the NPOESS Airborne Sounder Testbed Interferometer (NAST-I) and the Scanning High-resolution Interferometer Sounder (S-HIS) aboard the NASA WB-57 aircraft), UK Facility for Airborne Atmospheric Measurements (FAAM) BAe146-301 aircraft in situ instruments, dedicated dropsondes, radiosondes, and ground-based Raman Lidar. An overview of the JAIVEx retrieval validation plan and some initial results of this field campaign are presented.

  4. Validation of an Active Gear, Flexible Aircraft Take-off and Landing analysis (AGFATL)

    NASA Technical Reports Server (NTRS)

    Mcgehee, J. R.

    1984-01-01

    The results of an analytical investigation using a computer program for active gear, flexible aircraft take off and landing analysis (AGFATL) are compared with experimental data from shaker tests, drop tests, and simulated landing tests to validate the AGFATL computer program. Comparison of experimental and analytical responses for both passive and active gears indicates good agreement for shaker tests and drop tests. For the simulated landing tests, the passive and active gears were influenced by large strut binding friction forces. The inclusion of these undefined forces in the analytical simulations was difficult, and consequently only fair to good agreement was obtained. An assessment of the results from the investigation indicates that the AGFATL computer program is a valid tool for the study and initial design of series hydraulic active control landing gear systems.

  5. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method for analyzing and predicting the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Our experimental results show that the proposed evaluation approach offers a significant improvement in fidelity compared to evaluation using total execution time, and that the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
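    The contrast the abstract draws between coarse and fine-grained validation can be sketched as follows. The two-sample Kolmogorov-Smirnov statistic is used here only as one possible distance between trace distributions; it is not necessarily the metric the authors propose, and all trace values are invented.

```python
# Hypothetical sketch contrasting (1) coarse percent error of total execution
# time with (2) a fine-grained statistical comparison of per-event durations
# taken from execution traces. Values are invented for illustration.
from scipy import stats

real_trace = [1.02, 0.98, 1.10, 0.95, 1.05, 1.01, 0.99]   # measured event times (s)
sim_trace = [1.00, 1.03, 0.97, 1.08, 1.02, 0.96, 1.04]    # simulated event times (s)

# Coarse-grained: percent error of total execution time
percent_error = abs(sum(sim_trace) - sum(real_trace)) / sum(real_trace) * 100

# Fine-grained: distance between the two distributions of event durations
ks_statistic, p_value = stats.ks_2samp(real_trace, sim_trace)

print(f"percent error of total time = {percent_error:.2f}%")
print(f"KS distance between traces = {ks_statistic:.2f} (p = {p_value:.2f})")
```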

  6. Low-Power Baseline Test Results for the GPU 3 Stirling Engine

    NASA Technical Reports Server (NTRS)

    Thieme, L. G.

    1979-01-01

    A 7.5 kW (10 hp) Stirling engine was converted to a research configuration in order to obtain data for validating Stirling-cycle computer simulations. Test results for a range of heater-tube gas temperatures, mean compression-space pressures, and engine speeds with both helium and hydrogen as the working fluid are summarized. An instrumentation system to determine indicated work is described and preliminary results are presented.

  7. Integrated Summary Report: Validation of Two Binding Assays ...

    EPA Pesticide Factsheets

    This Integrated Summary Report (ISR) summarizes, in a single document, the results from an international multi-laboratory validation study conducted for two in vitro estrogen receptor (ER) binding assays. These assays both use human recombinant estrogen receptor, alpha subtype (hrERα), to identify chemicals that may impact estrogen signaling through binding to the ER. The purpose of the ISR is to support the peer review of the findings obtained during the validation process. The two assays evaluated during this validation process are: the Freyberger-Wilson Assay (FW), using a full-length human ER, and the Chemical Evaluation and Research Institute (CERI) Assay, using a ligand-binding domain of the human ER. The two assays are mechanistically and functionally similar in that each measures the ability of a test chemical to competitively inhibit binding of [3H]17β-estradiol to the human recombinant ER. The essential elements of the FW and the CERI assays were developed at the laboratories of Bayer Pharma AG, Wuppertal, Germany (Freyberger et al., 2010) and CERI, Tokyo, Japan (Akahori et al., 2008), respectively. The ER competitive binding assay has long been in use and is a well-characterized approach, but historically uses rodent or other animal tissues as a source of the ER. Validation of the FW and CERI assays using human recombinant estrogen receptors (α subtype) will provide an updated alternative for the Agency’s current test guideline (OPPTS 89

  8. Validation of antibiotic residue tests for dairy goats.

    PubMed

    Zeng, S S; Hart, S; Escobar, E N; Tesfai, K

    1998-03-01

    The SNAP test, LacTek test (B-L and CEF), Charm Bacillus stearothermophilus var. calidolactis disk assay (BsDA), and Charm II Tablet Beta-lactam sequential test were validated using antibiotic-fortified and -incurred goat milk following the protocol for test kit validations of the U.S. Food and Drug Administration Center for Veterinary Medicine. The SNAP, Charm BsDA, and Charm II Tablet Sequential tests were sensitive and reliable in detecting antibiotic residues in goat milk. All three assays showed greater than 90% sensitivity and specificity at tolerance and detection levels. However, caution should be taken in interpreting test results at detection levels. Because of the high sensitivity of these three tests, false-violative results could be obtained in goat milk containing antibiotic residues below the tolerance level. Goat milk testing positive by these tests must be confirmed using a more sophisticated methodology, such as high-performance liquid chromatography, before the milk is condemned. The LacTek B-L test did not detect several antibiotics, including penicillin G, in goat milk at tolerance levels. However, LacTek CEF was excellent in detecting ceftiofur residue in goat milk.

  9. A content validity study of signs, symptoms and diseases/health problems expressed in LIBRAS.

    PubMed

    Aragão, Jamilly da Silva; de França, Inacia Sátiro Xavier; Coura, Alexsandro Silva; de Sousa, Francisco Stélio; Batista, Joana D'arc Lyra; Magalhães, Isabella Medeiros de Oliveira

    2015-01-01

    Objective: To validate the content of signs, symptoms and diseases/health problems expressed in LIBRAS for people with deafness. Method: Methodological development study, which involved 36 people with deafness and three LIBRAS specialists. The study was conducted in three stages: investigation of the signs, symptoms and diseases/health problems, referred to by people with deafness, reported in a questionnaire; video recordings of how people with deafness express, through LIBRAS, the signs, symptoms and diseases/health problems; and validation of the contents of the recordings of the expressions by LIBRAS specialists. Data were processed in a spreadsheet and analyzed using univariate tables, with absolute frequencies and percentages. The validation results were analyzed using the Content Validity Index (CVI). Thirty-three LIBRAS expressions of signs, symptoms and diseases/health problems were evaluated, and 28 obtained a satisfactory CVI (1.00). The signs, symptoms and diseases/health problems expressed in LIBRAS presented validity, in the study region, for health professionals, especially nurses, for use in the clinical anamnesis of the nursing consultation for people with deafness.
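    An item-level CVI is commonly computed as the proportion of experts who rate an item as adequate; the sketch below uses that common definition (the paper's exact formula is not reproduced here, and the ratings are invented).

```python
# Hypothetical sketch of an item-level Content Validity Index (CVI):
# the proportion of experts who rate an expression as adequate.
# Ratings are invented; 1 = adequate, 0 = not adequate.
ratings_per_expression = {
    "headache": [1, 1, 1],   # all three specialists agree -> CVI = 1.00
    "nausea":   [1, 1, 0],   # one disagreement -> CVI = 0.67
}

for expression, ratings in ratings_per_expression.items():
    cvi = sum(ratings) / len(ratings)
    print(f"{expression}: I-CVI = {cvi:.2f}")
```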

  10. Validating Laboratory Results in Electronic Health Records

    PubMed Central

    Perrotta, Peter L.; Karcher, Donald S.

    2017-01-01

    Context Laboratories must ensure that the test results and pathology reports they transmit to a patient’s electronic health record (EHR) are accurate, complete, and presented in a useable format. Objective To determine the accuracy, completeness, and formatting of laboratory test results and pathology reports transmitted from the laboratory to the EHR. Design Participants from 45 institutions retrospectively reviewed results from 16 different laboratory tests, including clinical and anatomic pathology results, within the EHR used by their providers to view laboratory results. Results were evaluated for accuracy, presence of required elements, and usability. Both normal and abnormal results were reviewed for tests, some of which were performed in-house and others at a reference laboratory. Results Overall accuracy for test results transmitted to the EHR was greater than 99.3% (1052 of 1059). There was lower compliance for completeness of test results, with 69.6% (732 of 1051) of the test results containing all essential reporting elements. Institutions that had fewer than half of their orders entered electronically had lower test result completeness rates. The rate of appropriate formatting of results was 90.9% (98 of 1010). Conclusions The great majority of test results are accurately transmitted from the laboratory to the EHR; however, lower percentages are transmitted completely and in a useable format. Laboratories should verify the accuracy, completeness, and format of test results at the time of test implementation, after test changes, and periodically. PMID:27575266

  11. International normalized ratio (INR) testing in Europe: between-laboratory comparability of test results obtained by Quick and Owren reagents.

    PubMed

    Meijer, Piet; Kynde, Karin; van den Besselaar, Antonius M H P; Van Blerk, Marjan; Woods, Timothy A L

    2018-04-12

    This study was designed to obtain an overview of the analytical quality of the prothrombin time, reported as the international normalized ratio (INR), and to assess the variation of INR results between European laboratories, the difference between Quick-type and Owren-type methods and the effect of using local INR calibration or not. In addition, we assessed the variation in INR results obtained for a single donation in comparison with a pool of several plasmas. A set of four different lyophilized plasma samples was distributed via national EQA organizations to participating laboratories for INR measurement. Between-laboratory variation was lower in the Owren group than in the Quick group (on average: 6.7% vs. 8.1%, respectively). Differences in the mean INR value between the Owren and Quick group were relatively small (<0.20 INR). Between-laboratory variation was lower after local INR calibration (CV: 6.7% vs. 8.6%). For laboratories performing local calibration, the between-laboratory variation was quite similar for the Owren and Quick group (on average: 6.5% and 6.7%, respectively). Clinically significant differences in INR results (difference in INR>0.5) were observed between different reagents. No systematic significant differences in the between-laboratory variation for a single-plasma sample and a pooled plasma sample were observed. The comparability for laboratories using local calibration of their thromboplastin reagent is better than for laboratories not performing local calibration. Implementing local calibration is strongly recommended for the measurement of INR.
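    The between-laboratory variation quoted above is a coefficient of variation (CV) across the laboratories' results for one sample. A minimal sketch of that calculation follows; the INR values are invented, not survey data.

```python
# Hypothetical sketch: between-laboratory coefficient of variation (CV) for
# one lyophilized sample. One invented INR result per laboratory.
import statistics

inr_results = [2.4, 2.6, 2.5, 2.8, 2.3, 2.7, 2.5, 2.6]

mean_inr = statistics.mean(inr_results)
sd_inr = statistics.stdev(inr_results)          # sample standard deviation
cv_percent = sd_inr / mean_inr * 100

print(f"mean INR = {mean_inr:.2f}, between-laboratory CV = {cv_percent:.1f}%")
```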

  12. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  13. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  14. Dynamic Assessment of Reading Difficulties: Predictive and Incremental Validity on Attitude toward Reading and the Use of Dialogue/Participation Strategies in Classroom Activities.

    PubMed

    Navarro, Juan-José; Lara, Laura

    2017-01-01

    Dynamic Assessment (DA) has been shown to have more predictive value than conventional tests for academic performance. However, in relation to reading difficulties, further research is needed to determine the predictive validity of DA for specific aspects of the different processes involved in reading and the differential validity of DA for different subgroups of students with an academic disadvantage. This paper analyzes the implementation of a DA device that evaluates processes involved in reading (EDPL) among 60 students with reading comprehension difficulties between 9 and 16 years of age, of whom 20 have intellectual disabilities, 24 have reading-related learning disabilities, and 16 have socio-cultural disadvantages. We specifically analyze the predictive validity of the EDPL device over attitude toward reading, and the use of dialogue/participation strategies in reading activities in the classroom during the implementation stage. We also analyze whether the EDPL device provides additional information to that obtained with a conventionally applied personal-social adjustment scale (APSL). Results showed that dynamic scores, obtained from the implementation of the EDPL device, significantly predict the studied variables. Moreover, dynamic scores showed significant incremental validity in relation to predictions based on an APSL scale. In relation to differential validity, the results indicated superior predictive validity of DA for students with intellectual disabilities and reading disabilities compared with students with socio-cultural disadvantages. Furthermore, the role of metacognition and its relation to the processes of personal-social adjustment in explaining the results is discussed.

  15. Dynamic Assessment of Reading Difficulties: Predictive and Incremental Validity on Attitude toward Reading and the Use of Dialogue/Participation Strategies in Classroom Activities

    PubMed Central

    Navarro, Juan-José; Lara, Laura

    2017-01-01

    Dynamic Assessment (DA) has been shown to have more predictive value than conventional tests for academic performance. However, in relation to reading difficulties, further research is needed to determine the predictive validity of DA for specific aspects of the different processes involved in reading and the differential validity of DA for different subgroups of students with an academic disadvantage. This paper analyzes the implementation of a DA device that evaluates processes involved in reading (EDPL) among 60 students with reading comprehension difficulties between 9 and 16 years of age, of whom 20 have intellectual disabilities, 24 have reading-related learning disabilities, and 16 have socio-cultural disadvantages. We specifically analyze the predictive validity of the EDPL device over attitude toward reading, and the use of dialogue/participation strategies in reading activities in the classroom during the implementation stage. We also analyze whether the EDPL device provides additional information to that obtained with a conventionally applied personal-social adjustment scale (APSL). Results showed that dynamic scores, obtained from the implementation of the EDPL device, significantly predict the studied variables. Moreover, dynamic scores showed significant incremental validity in relation to predictions based on an APSL scale. In relation to differential validity, the results indicated superior predictive validity of DA for students with intellectual disabilities and reading disabilities compared with students with socio-cultural disadvantages. Furthermore, the role of metacognition and its relation to the processes of personal-social adjustment in explaining the results is discussed. PMID:28243215

  16. Validity of EQ-5D in general population of Taiwan: results of the 2009 National Health Interview and Drug Abuse Survey of Taiwan.

    PubMed

    Yu, Sheng-Tsung; Chang, Hsing-Yi; Yao, Kai-Ping; Lin, Yu-Hsuan; Hurng, Baai-Shyun

    2015-10-01

    The aim of this study was to examine the validity of the EuroQOL five dimensions questionnaire (EQ-5D) using nationally representative data from the National Health Interview Survey (NHIS), through comparison with the Short Form 36 (SF-36). Data for this study came from the 2009 NHIS in Taiwan. The study sample comprised the 4007 participants aged 20-64 years who completed the survey. We used SUDAAN 10.0 (SAS-Callable) to carry out weighted estimation and statistical inference. The EQ index was estimated using norm values from a Taiwanese study as well as from Japan and the United Kingdom (UK). The SF-36 score was standardized using American norm values. In terms of concurrent validity, the EQ-5D met the five hypotheses. The results did not fulfill the hypothesis that women would have lower visual analogue scale (EQ-VAS) scores. In terms of discriminant validity, the EQ-5D fulfilled two hypotheses. Our results approached, but did not fulfill, the hypothesis that there would be a weak association between the physical and psychological dimensions of the EQ-5D and the mental component summary score of the SF-36. Results were comparable regardless of whether the Japanese or UK norm value sets were used. We were able to fulfill many, but not all, of our validity hypotheses regardless of whether the established Japanese or UK norm value sets or the Taiwanese norm values were used. The EQ-5D is an effective and simple instrument for assessing the health-related quality of life of the general population in Taiwan.

  17. Towards development and validation of an intraoperative assessment tool for robot-assisted radical prostatectomy training: results of a Delphi study

    PubMed Central

    Morris, Christopher; Hoogenes, Jen; Shayegan, Bobby; Matsumoto, Edward D.

    2017-01-01

    ABSTRACT Introduction As urology training shifts toward competency-based frameworks, the need for tools for high-stakes assessment of trainees is crucial. Validated assessment metrics are lacking for robot-assisted radical prostatectomy (RARP). As it is quickly becoming the gold standard for treatment of localized prostate cancer, the development and validation of a RARP assessment tool for training is timely. Materials and methods We recruited 13 expert RARP surgeons from the United States and Canada to serve as our Delphi panel. Using an initial inventory developed via a modified Delphi process with urology residents, fellows, and staff at our institution, panelists iteratively rated each step and sub-step on a 5-point Likert scale of agreement for inclusion in the final assessment tool. Qualitative feedback was elicited for each item to determine proper step placement, wording, and suggestions. Results Panelists' responses were compiled and the inventory was edited through three iterations, after which 100% consensus was achieved. The initial inventory steps were decreased by 13% and a skip pattern was incorporated. The final RARP stepwise inventory comprised 13 critical steps with 52 sub-steps. There was no attrition throughout the Delphi process. Conclusions Our Delphi study resulted in a comprehensive inventory of intraoperative RARP steps with excellent consensus. This final inventory will be used to develop a valid and psychometrically sound intraoperative assessment tool for use during RARP training and evaluation, with the aim of increasing competency of all trainees. PMID:28379668

  18. Validating Measures of Real-World Outcome: The Results of the VALERO Expert Survey and RAND Panel

    PubMed Central

    Leifker, Feea R.; Patterson, Thomas L.; Heaton, Robert K.; Harvey, Philip D.

    2011-01-01

    Background: People with schizophrenia demonstrate considerable discrepancy between self-reported functioning and informant reports. It is not clear whether these discrepancies originate from the instruments used or from the perspectives of different informants. The goal of the Validation of Everyday Real-World Outcomes (VALERO) Study is to enhance the measurement of real-world (RW) outcomes in the social, residential, and vocational domains through selection of optimal scales and informants using a multistep process similar to the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) initiative. Methods: Forty-eight experts provided their opinion regarding the best scales measuring RW outcomes. Fifty-nine measures were nominated. The investigators selected the 11 scales that were the most highly nominated, had the most published validity data, and best represented the domains of interest. Information was provided to other experts who served as RAND panelists. Panelists rated each measure for its suitability across multiple a priori domains. Discrepant ratings were discussed until consensus was reached. Results: Following the RAND Panel, the 2 scales that scored highest across the various criteria for each of the classes of scales (hybrid, social functioning, and everyday living skills) were selected for use in the first substudy of VALERO. The scales selected were the Quality-of-Life Scale, Specific Levels of Functioning Scale, Social Behavior Schedule, Social Functioning Scale, Independent Living Skills Schedule, and Life Skills Profile. Discussion: The results show that although there are significant limitations with current scales used for the assessment of RW outcome in schizophrenia, a consensus is possible. Further, several existing instruments were rated as useful for measuring social, residential, and vocational outcomes. PMID:19525354

  19. Real-Time Sensor Validation, Signal Reconstruction, and Feature Detection for an RLV Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.

    1998-01-01

    A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.

  20. Validation of the AATSR L2 GSST product with in situ measurements from the M-AERI

    NASA Astrophysics Data System (ADS)

    Noyes, E.; Minnett, P.; Remedios, J.; Mannerings, B.; Corlett, G.; Edwards, M.; Llewellyn-Jones, D.

    Precise, in situ, measurements of skin Sea Surface Temperature (SSST) have been obtained over the Eastern Caribbean Sea, using the Marine Atmospheric Emitted Radiance Interferometer (M-AERI) deployed onboard the Explorer of the Seas cruise ship. These measurements provide a near-continuous SSST dataset and have been used to validate the Advanced Along-Track Scanning Radiometer (AATSR) Level 2 operational dual-view Gridded Sea Surface Temperature (GSST) product over the area. The (A)ATSR instrument has a unique design in that it has both a nadir- and forward-view, allowing the Earth's surface to be viewed along two different atmospheric path lengths and enabling an improved atmospheric correction to be made when retrieving measurements of SST. The infrared radiometer also uses an innovative and exceptionally stable on-board calibration system, which, together with actively cooled detectors, gives exceptionally high radiometric sensitivity and precision, enabling SSTs to be retrieved to within ± 0.3 K (1-sigma limit). The unprecedented number of measurements provided by the M-AERI project enables us to validate the AATSR SST products on a scale that has not been possible with its two predecessors, ATSR-1 and ATSR-2. Validation results obtained between September 2002 and September 2003 are presented and indicate that, although the AATSR appears to measure slightly warm (circa + 0.14 K), the GSST product is accurate to within 0.28-0.41 K (Root Mean Square difference) in this geographical region, depending on the validation criteria used. We also present the results of further investigations into a number of validation points that do not fall within the target ± 0.3 K accuracy zone.
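    The bias and Root Mean Square difference quoted above come directly from the satellite-minus-in-situ matchup differences. The sketch below shows that arithmetic with invented values (they are not M-AERI or AATSR data).

```python
# Hypothetical sketch: bias and root-mean-square (RMS) difference between
# satellite and in situ skin SST matchups. All values are invented.
import math

aatsr_sst = [299.12, 300.05, 298.76, 299.84, 300.21]   # satellite GSST (K)
maeri_sst = [298.95, 299.90, 298.70, 299.60, 300.10]   # in situ skin SST (K)

diffs = [a - m for a, m in zip(aatsr_sst, maeri_sst)]
bias = sum(diffs) / len(diffs)                               # mean difference
rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))      # RMS difference

print(f"bias = {bias:+.2f} K, RMS difference = {rms:.2f} K")
```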

  1. Visual Contrast Sensitivity Functions Obtained from Untrained Observers Using Tracking and Staircase Procedures. Final Report.

    ERIC Educational Resources Information Center

    Geri, George A.; Hubbard, David C.

    Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…

  2. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
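    The leave-one-out idea behind this cross-validation can be sketched as follows: pool all studies except one, then compare the left-out study's estimate with the pooled value, standardized by the combined variance. This is only an illustration under simple inverse-variance (fixed-effect) weighting; the exact form of the published Vn statistic and its random-effects weighting differ, and the effect sizes below are invented.

```python
# Hypothetical sketch of leave-one-out cross-validation of a meta-analysis
# summary estimate. Effect estimates and variances are invented.
import math

effects = [0.30, 0.25, 0.42, 0.18, 0.35]          # study effect estimates
variances = [0.010, 0.015, 0.020, 0.012, 0.018]   # within-study variances

def pooled(effs, varis):
    """Simple inverse-variance pooled estimate and its variance."""
    weights = [1.0 / v for v in varis]
    est = sum(w * e for w, e in zip(weights, effs)) / sum(weights)
    return est, 1.0 / sum(weights)

for i, (e_i, v_i) in enumerate(zip(effects, variances)):
    rest_e = effects[:i] + effects[i + 1:]
    rest_v = variances[:i] + variances[i + 1:]
    est, var = pooled(rest_e, rest_v)
    z = (e_i - est) / math.sqrt(v_i + var)   # standardized leave-one-out residual
    print(f"study {i + 1}: leave-one-out z = {z:+.2f}")
```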

  3. The Community of Inquiry Instrument: Validation and Results in Online Health Care Disciplines

    ERIC Educational Resources Information Center

    Carlon, S.; Bennett-Woods, D.; Berg, B.; Claywell, L.; LeDuc, K.; Marcisz, N.; Mulhall, M.; Noteboom, T.; Snedden, T.; Whalen, K.; Zenoni, L.

    2012-01-01

    This descriptive study using survey design sought to establish the efficacy of the Community of Inquiry instrument utilized in a study published by Shea and Bidjerano in 2009 exploring an online community of business students in a multi-institutional study. The current study sought to validate the instrument with a population of students in three…

  4. Aircraft and ground vehicle friction correlation test results obtained under winter runway conditions during joint FAA/NASA Runway Friction Program

    NASA Technical Reports Server (NTRS)

    Yager, Thomas J.; Vogler, William A.; Baldasare, Paul

    1988-01-01

    Aircraft and ground vehicle friction data collected during the Joint FAA/NASA Runway Friction Program under winter runway conditions are discussed and test results are summarized. The relationship between the different ground vehicle friction measurements obtained on compacted snow- and ice-covered conditions is defined together with the correlation to aircraft tire friction performance under similar runway conditions.

  5. Validation of a Monte Carlo simulation of the Inveon PET scanner using GATE

    NASA Astrophysics Data System (ADS)

    Lu, Lijun; Zhang, Houjin; Bian, Zhaoying; Ma, Jianhua; Feng, Qiangjin; Chen, Wufan

    2016-08-01

    The purpose of this study is to validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation toolkit for modelling the performance characteristics of the Siemens Inveon small animal PET system. The simulation results were validated against experimental/published data in accordance with the NEMA NU-4 2008 protocol for standardized evaluation of the spatial resolution, sensitivity, scatter fraction (SF) and noise equivalent counting rate (NECR) of a preclinical PET system. Agreement to within 18% was obtained between the simulated and experimental radial, tangential and axial spatial resolutions. The simulated peak NECR of the mouse-size phantom agreed with the experimental result, while for the rat-size phantom the simulated value was higher than the experimental result. The simulated and experimental SFs of the mouse- and rat-size phantoms agreed to within 2%. These results show the feasibility of our GATE model to accurately simulate, within certain limits, all major performance characteristics of the Inveon PET system.

  6. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain a practical experience in the difference between performing an external standardization and a standard addition.
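    A brief sketch of the two standardization strategies the experiment contrasts is given below; all signals and concentrations are invented. A noticeable difference between the two answers is the kind of evidence of a matrix effect that would push students toward standard addition.

```python
# Hypothetical sketch: external standardization vs. standard addition.
# Signals and concentrations are invented for illustration.
import numpy as np

# External standardization: calibrate with matrix-free standards
std_conc = np.array([0.0, 1.0, 2.0, 4.0])          # standard concentrations (ppm)
std_signal = np.array([0.02, 0.51, 1.00, 1.98])    # instrument response
slope, intercept = np.polyfit(std_conc, std_signal, 1)
sample_signal = 0.80
c_external = (sample_signal - intercept) / slope

# Standard addition: spike the sample itself, extrapolate to zero added analyte
added = np.array([0.0, 1.0, 2.0, 3.0])             # added concentration (ppm)
spiked_signal = np.array([0.60, 1.05, 1.52, 1.97]) # response of spiked sample
sa_slope, sa_intercept = np.polyfit(added, spiked_signal, 1)
c_standard_addition = sa_intercept / sa_slope      # magnitude of the x-intercept

print(f"external standards: {c_external:.2f} ppm")
print(f"standard addition:  {c_standard_addition:.2f} ppm")
```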

  7. Criterion-related validity of self-reported stair climbing in older adults.

    PubMed

    Higueras-Fresnillo, Sara; Esteban-Cornejo, Irene; Gasque, Pablo; Veiga, Oscar L; Martinez-Gomez, David

    2018-02-01

    Stair climbing is an activity of daily living that might contribute to increasing levels of physical activity (PA). To date, there is no study examining the validity of stair climbing assessed by self-report. The aim of this study was, therefore, to examine the validity of stair climbing estimated from one question included in a common questionnaire compared to a pattern-recognition activity monitor in older adults. A total of 138 older adults (94 women), aged 65-86 years (70.9 ± 4.7 years), from the IMPACT65+ study participated in this validity study. Estimates of stair climbing were obtained from the European Prospective Investigation into Cancer and Nutrition (EPIC) PA questionnaire. An objective assessment of stair climbing was obtained with the Intelligent Device for Energy Expenditure and Activity (IDEEA) monitor. The correlation between the two methods of assessing stair climbing was fair (ρ = 0.22, p = 0.008 for PA energy expenditure and ρ = 0.26, p = 0.002 for duration). Mean values for self-report and the IDEEA were 7.96 ± 10.52 vs. 9.88 ± 3.32 METs-min/day for PA energy expenditure, and 0.99 ± 1.32 vs. 1.79 ± 2.02 min/day for duration (both Wilcoxon test p < 0.001). Results from the Bland-Altman analysis indicate that the bias between the two instruments was -1.91 ± 10.30 METs-min/day and -0.80 ± 1.99 min/day, and the corresponding limits of agreement were from 18.27 to -22.10 METs-min/day and from 3.09 to -4.70 min/day, respectively. Our results indicate that self-reported stair climbing has modest validity for accurately ranking older participants, and underestimates both PA energy expenditure and its duration, as compared with an objectively measured method.
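    The Bland-Altman quantities reported above are the mean of the paired differences (bias) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch with invented values follows.

```python
# Hypothetical sketch of a Bland-Altman analysis: bias and 95% limits of
# agreement between two instruments. All values are invented.
import statistics

self_report = [5.0, 12.0, 0.0, 8.5, 20.0, 3.0]   # stair-climbing PAEE, METs-min/day
monitor = [9.5, 10.0, 6.0, 11.0, 14.5, 8.0]      # IDEEA estimate, METs-min/day

diffs = [s - m for s, m in zip(self_report, monitor)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.2f}; limits of agreement = {lower:.2f} to {upper:.2f}")
```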

  8. Domain of validity of the perturbative approach to femtosecond optical spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelin, Maxim F.; Rao, B. Jayachander; Nest, Mathias

    2013-12-14

    We have performed numerical nonperturbative simulations of transient absorption pump-probe responses for a series of molecular model systems. The resulting signals as a function of the laser field strength and the pump-probe delay time are compared with those obtained in the perturbative response function formalism. The simulations and their theoretical analysis indicate that the perturbative description remains valid up to moderately strong laser pulses, corresponding to a rather substantial depopulation (population) of the initial (final) electronic states.

  9. Veggie Hardware Validation Test Preliminary Results and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Massa, Gioia D.; Dufour, Nicole F.; Smith, T. M.

    2014-01-01

    The Veggie hardware validation test, VEG-01, was conducted on the International Space Station during Expeditions 39 and 40 from May through June of 2014. The Veggie hardware and the VEG-01 experiment payload were launched to station aboard the SpaceX-3 resupply mission in April, 2014. Veggie was installed in an Expedite-the-Processing-of-Experiments-to-Space-Station (ExPRESS) rack in the Columbus module, and the VEG-01 validation test was initiated. Veggie installation was successful, and power was supplied to the unit. The hardware was programmed and the root mat reservoir and plant pillows were installed without issue. As expected, a small amount of growth media was observed in the sealed bags which enclosed the plant pillows when they were destowed. Astronaut Steve Swanson used the wet/dry vacuum to clean up the escaped particles. Water insertion or priming the first plant pillow was unsuccessful as an issue prevented water movement through the quick disconnect. All subsequent pillows were successfully primed, and the initial pillow was replaced with a backup pillow and successfully primed. Six pillows were primed, but only five pillows had plants which germinated. After about a week and a half it was observed that plants were not growing well and that pillow wicks were dry. This indicated that the reservoir was not supplying sufficient water to the pillows via wicking, and so the team reverted to an operational fix which added water directly to the plant pillows. Direct watering of the pillows led to a recovery in several of the stressed plants; a couple of which did not recover. An important lesson learned involved Veggie's bellows. The bellows tended to float and interfere with operations when opened, so Steve secured them to the baseplate during plant tending operations. Due to the perceived intensity of the LED lights, the crew found it challenging to both work under the lights and read crew procedures on their computer. Although the lights are not a safety

  10. Assessment of bachelor's theses in a nursing degree with a rubrics system: Development and validation study.

    PubMed

    González-Chordá, Víctor M; Mena-Tudela, Desirée; Salas-Medina, Pablo; Cervera-Gasch, Agueda; Orts-Cortés, Isabel; Maciá-Soler, Loreto

    2016-02-01

    Writing a bachelor's thesis (BT) is the last step in obtaining a nursing degree. In order to perform an effective assessment of a nursing BT, reliable and valid tools are required. To develop and validate a 3-rubric system (drafting process, dissertation, and viva) to assess final-year nursing students' BT. A multi-disciplinary study of content validity and psychometric properties. The study was carried out between December 2014 and July 2015. Nursing Degree at Universitat Jaume I, Spain. Eleven experts (9 nursing professors and 2 education professors from 6 different universities) took part in the development and content validity stages. Fifty-two theses presented during the 2014-2015 academic year were included by consecutive sampling of cases in order to study the psychometric properties. First, a group of experts was created to validate the content of the assessment system based on three rubrics (drafting process, dissertation, and viva). Subsequently, a reliability and validity study of the rubrics was carried out on the 52 theses presented during the 2014-2015 academic year. The BT drafting process rubric has 8 criteria (S-CVI=0.93; α=0.837; ICC=0.614), the dissertation rubric has 7 criteria (S-CVI=0.9; α=0.893; ICC=0.74), and the viva rubric has 4 criteria (S-CVI=0.86; α=8.16; ICC=0.895). A nursing BT assessment system based on three rubrics (drafting process, dissertation, and viva) has been validated. This system may be transferred to other nursing degrees or to degrees from other academic areas. It is necessary to continue the validation process, taking into account factors that may affect the results obtained. Copyright © 2015 Elsevier Ltd. All rights reserved.
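
    For context on the two index families quoted in the results (these are the textbook formulas, not code from the study): Cronbach's alpha is computed from the item and total-score variances, and the scale-level content validity index S-CVI/Ave is the mean of the item-level proportions of experts rating an item relevant. A short Python sketch with hypothetical ratings:

      import numpy as np

      def cronbach_alpha(item_scores):
          """item_scores: (n_subjects, n_items) matrix of ratings."""
          x = np.asarray(item_scores, dtype=float)
          k = x.shape[1]
          return k / (k - 1) * (1.0 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

      def scale_cvi_ave(relevance):
          """relevance: (n_experts, n_items) 0/1 matrix (1 = item judged relevant).
          Returns S-CVI/Ave, the mean of the item-level content validity indices."""
          return np.asarray(relevance, dtype=float).mean(axis=0).mean()

      # Hypothetical ratings: 6 theses scored on 4 viva criteria (1-4 scale)
      ratings = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 3],
                 [1, 2, 2, 2], [3, 3, 4, 4], [2, 3, 2, 3]]
      print(round(cronbach_alpha(ratings), 3))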

  11. Effectiveness of Autologous Fat Grafting in Adherent Scars: Results Obtained by a Comprehensive Scar Evaluation Protocol.

    PubMed

    Jaspers, Mariëlle E H; Brouwer, Katrien M; van Trier, Antoine J M; Groot, Marloes L; Middelkoop, Esther; van Zuijlen, Paul P M

    2017-01-01

    Nowadays, patients normally survive severe traumas such as burn injuries and necrotizing fasciitis. Large skin defects can be closed but the scars remain. Scars may become adherent to underlying structures when the subcutaneous fat layer is damaged. Autologous fat grafting provides the possibility of reconstructing a functional sliding layer underneath the scar. Autologous fat grafting is becoming increasingly popular for scar treatment, although large studies using validated evaluation tools are lacking. The authors therefore objectified the effectiveness of single-treatment autologous fat grafting on scar pliability using validated scar measurement tools. Forty patients with adherent scars receiving single-treatment autologous fat grafting were measured preoperatively and at 3-month follow-up. The primary outcome parameter was scar pliability, measured using the Cutometer. Scar quality was also evaluated by the Patient and Observer Scar Assessment Scale and the DSM II ColorMeter. To prevent selection bias, measurements were performed following a standardized algorithm. The Cutometer parameters elasticity and maximal extension improved 22.5 percent (p < 0.001) and 15.6 percent (p = 0.001), respectively. Total Patient and Observer Scar Assessment Scale scores improved from 3.6 to 2.9 on the observer scale, and from 5.1 to 3.8 on the patient scale (both p < 0.001). Color differences between the scar and normal skin remained unaltered. For the first time, the effect of autologous fat grafting on functional scar parameters was ascertained using a comprehensive scar evaluation protocol. The improved scar pliability supports the authors' hypothesis that the function of the subcutis can be restored to a certain extent by single-treatment autologous fat grafting. Therapeutic, IV.

  12. Development and Validation of the Brief Esophageal Dysphagia Questionnaire

    PubMed Central

    Taft, Tiffany H.; Riehl, Megan; Sodikoff, Jamie B.; Kahrilas, Peter J.; Keefer, Laurie; Doerfler, Bethany; Pandolfino, John E.

    2017-01-01

    Background Esophageal dysphagia is common in gastroenterology practice and has multiple etiologies. A complication for some patients with dysphagia is food impaction. A valid and reliable questionnaire to rapidly evaluate esophageal dysphagia and impaction symptoms can aid the gastroenterologist in gathering information to inform treatment approach and further evaluation, including endoscopy. Methods 1,638 patients participated over two study phases. 744 participants completed the Brief Esophageal Dysphagia Questionnaire (BEDQ) for phase 1; 869 completed the BEDQ, Visceral Sensitivity Index, Gastroesophageal Reflux Disease Questionnaire, and Hospital Anxiety and Depression Scale for phase 2. Demographic and clinical data were obtained via the electronic medical record. The BEDQ was evaluated for internal consistency, split-half reliability, ceiling and floor effects, and construct validity. Key Results The BEDQ demonstrated excellent internal consistency, reliability, and construct validity. The symptom frequency and severity scales scored above the standard acceptable cutoffs for reliability while the impaction subscale yielded poor internal consistency and split-half reliability; thus the impaction items were deemed qualifiers only and removed from the total score. No significant ceiling or floor effects were found with the exception of 1 item, and inter-item correlations fell within accepted ranges. Construct validity was supported by moderate yet significant correlations with other measures. The predictive ability of the BEDQ was small but significant. Conclusions & Inferences The BEDQ represents a rapid, reliable and valid assessment tool for esophageal dysphagia with food impaction for clinical practice that differentiates between patients with major motor dysfunction and mechanical obstruction. PMID:27380834

  13. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
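
    The defect frequencies mentioned above are the standard bearing kinematic frequencies; for an outer-raceway fault the relevant one is the ball-pass frequency of the outer race (BPFO). A short Python sketch of that textbook relation (the geometry values are hypothetical and are not those of the modelled bearing):

      import numpy as np

      def bpfo(n_rollers, shaft_hz, d_roller, d_pitch, contact_angle_deg=0.0):
          """Ball-pass frequency of the outer race (Hz) for a rolling element bearing."""
          phi = np.deg2rad(contact_angle_deg)
          return 0.5 * n_rollers * shaft_hz * (1.0 - (d_roller / d_pitch) * np.cos(phi))

      # Hypothetical geometry, for illustration only
      print(f"BPFO = {bpfo(n_rollers=9, shaft_hz=25.0, d_roller=7.9e-3, d_pitch=38.5e-3):.1f} Hz")

    In a power or envelope spectrum of the simulated vibration signal, an outer-race defect is expected to appear at this frequency and its harmonics.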

  14. The LANDSAT system operated in Brazil by CNPq/INPE - results obtained in the area of mapping and future perspectives

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Barbosa, M. N.

    1981-01-01

    The LANDSAT system, operated in the country by CNPq/INPE since 1973, systematically acquires, produces, and distributes both multispectral and panchromatic images obtained through remote sensing satellites to thousands of researchers and technicians involved in natural resources surveys. To cooperate in the solution of national problems, CNPq/INPE is developing efforts in the area of manipulation of those images with the objective of making them useful as planimetric bases for the simple revision of already published maps or for utilization as basic material in regions not yet reliably mapped. The results obtained from the performed tests are presented and the existing limitations are discussed. The new system, purchased to handle data from the next series of LANDSAT as well as from MAPSAT and SPOT, which will be in operation during the 1980s, is designed not only for natural resources surveys but also for the solution of cartographic problems.

  15. The definition and evaluation of the skills required to obtain a patient's history of illness: the use of videotape recordings

    PubMed Central

    Anderson, J.; Dowling, M. A. C.; Day, J. L.; Pettingale, K. W.

    1970-01-01

    Videotape recording apparatus was used to make records of case histories obtained from patients by students and doctors. These records were studied in order to identify the skills required to obtain a patient's history of illness. Each skill was defined. A questionnaire was developed in order to assess these skills and three independent observers watched the records of eighteen students and completed a questionnaire for each. The results of this were analysed for reliability and reproducibility between examiners. Moderate reliability and reproducibility were demonstrated. The questionnaire appeared to be a valid method of assessment and was capable of providing significant discrimination between students for each skill. A components analysis suggested that the marks for each skill depend on an overall impression obtained by each examiner and this overall impression is influenced by different skills for each examiner. PMID:5488220

  16. Modelling and validation of Proton exchange membrane fuel cell (PEMFC)

    NASA Astrophysics Data System (ADS)

    Mohiuddin, A. K. M.; Basran, N.; Khan, A. A.

    2018-01-01

    This paper is the outcome of a small-scale fuel cell project. A fuel cell is an electrochemical device that converts energy from a chemical reaction into electrical work. The Proton Exchange Membrane Fuel Cell (PEMFC) is one type of fuel cell; it is relatively efficient, has a low operating temperature and fast start-up capability, and offers high energy density. In this study, a mathematical model of a 1.2 W PEMFC is developed and simulated using MATLAB software. This model describes the PEMFC behaviour under steady-state conditions. The mathematical model determines the polarization curve, the power generated, and the efficiency of the fuel cell. Simulation results were validated by comparison with experimental results obtained from the test of a single PEMFC with a 3 V motor. The performance of the experimental PEMFC is slightly lower than that of the simulated PEMFC; however, both sets of results were found to be in good agreement. Experiments on hydrogen flow rate were also conducted to obtain the amount of hydrogen consumed to produce electrical work in the PEMFC.
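
    A steady-state polarization model of the kind described is commonly written as the open-circuit voltage minus activation, ohmic and concentration losses. The Python sketch below uses a generic textbook form with made-up constants; it is illustrative only and is not the authors' MATLAB model:

      import numpy as np

      def cell_voltage(i, e_oc=1.2, a_tafel=0.05, i0=1e-4, r_ohm=0.15, m=3e-5, n=5.0):
          """Generic steady-state PEMFC polarization model; all constants are hypothetical.
          i: current density in A/cm^2."""
          activation = a_tafel * np.log(np.maximum(i, i0) / i0)   # Tafel activation loss
          ohmic = r_ohm * i                                       # ohmic (membrane/contact) loss
          concentration = m * np.exp(n * i)                       # empirical mass-transport loss
          return e_oc - activation - ohmic - concentration

      i = np.linspace(0.0, 1.2, 7)                 # current density, A/cm^2
      print(np.round(cell_voltage(i), 3))          # cell voltage (V)
      print(np.round(cell_voltage(i) * i, 3))      # power density (W/cm^2)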

  17. [Validation of the Scale of Hope in Terminal Illness for relatives brief version (SHTI-b). Validity and reliability analysis].

    PubMed

    Villacieros, M; Bermejo, J C; Hassoun, H

    2017-12-29

    Bermejo and Villacieros' Scale of Hope in Terminal Disease (SHTD) specifically collects meanings of hope in the face of terminal disease, including considerations relating to psycho-emotional support and those with a transcendental sense. The objective of this paper is to validate an abbreviated version of the SHTD, rephrased to adapt all the items to a single domain. Starting from the published SHTD, an exploratory factor analysis (EFA) was carried out with a sample of 177 valid questionnaires. In a second study, with another sample of 180 valid questionnaires, a confirmatory factor analysis (CFA) and a correlation analysis with other measures of spiritual wellbeing (Functional Assessment of Chronic Illness Therapy-Sp, FACIT) and hope (Herth Hope Index, HHI) were carried out. A bidimensional model with satisfactory goodness-of-fit index values was obtained (GFI = 0.991; CFI = 0.984; SRMR = 0.08; RMSEA = 0.057); the Relations of Transcendence factor obtained a Cronbach's alpha of 0.872 and Personal Relations an alpha of 0.762. The correlations of the SHTI-rb with external measures were r = 0.527 with FACIT, r = 0.266 with HHI, r = 0.667 with the Spirituality subscale of FACIT, and r = 0.348 with the Interrelation factor of HHI. The Relations of Transcendence subscale correlated with both the Layout and Expectation and the Interrelation factors of HHI (r = 0.162 and r = 0.329, respectively), while the Personal Relations scale only correlated with the Interrelation factor of HHI (r = 0.244). The Scale of Hope in Terminal Illness for relatives (brief version) is a valid and reliable specific instrument for terminal patients.

  18. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing 137Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 and 40 mm in height and 8 and 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg in July 2006. The results obtained were better for the sources with 29 mm radius, showing relative bias of less than 5%, and for the sources with 10 mm height, showing relative bias of less than 6%. In comparison with the results obtained in the original work presenting the method, the majority of these results show excellent agreement.

  19. A Complete Reporting of MCNP6 Validation Results for Electron Energy Deposition in Single-Layer Extended Media for Source Energies <= 1-MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, David A.; Hughes, Henry Grady

    In this paper, we expand on previous validation work by Dixon and Hughes. That is, we present a more complete suite of validation results with respect to the well-known Lockwood energy deposition experiment. Lockwood et al. measured energy deposition in materials including beryllium, carbon, aluminum, iron, copper, molybdenum, tantalum, and uranium, for both single- and multi-layer 1-D geometries. Source configurations included mono-energetic, mono-directional electron beams with energies of 0.05-MeV, 0.1-MeV, 0.3-MeV, 0.5-MeV, and 1-MeV, in both normal and off-normal angles of incidence. These experiments are particularly valuable for validating electron transport codes, because they are closely represented by simulating pencil beams incident on 1-D semi-infinite slabs with and without material interfaces. Herein, we include total energy deposition and energy deposition profiles for the single-layer experiments reported by Lockwood et al. (a more complete multi-layer validation will follow in another report).

  20. On the validity of the Arrhenius equation for electron attachment rate coefficients.

    PubMed

    Fabrikant, Ilya I; Hotop, Hartmut

    2008-03-28

    The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient, as a function of temperature for dissociative electron attachment, is calculated using the resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.
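
    For reference, the Arrhenius form whose range of validity is examined is

      k(T) = A \exp\left( -\frac{E_a}{k_B T} \right) ,

    where E_a is the activation energy; the question addressed above is over what temperature range a plot of ln k versus 1/T for dissociative electron attachment can be treated as a straight line of slope -E_a/k_B.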

  1. Ground validation of DPR precipitation rate over Italy using H-SAF validation methodology

    NASA Astrophysics Data System (ADS)

    Puca, Silvia; Petracca, Marco; Sebastianelli, Stefano; Vulpiani, Gianfranco

    2017-04-01

    The H-SAF project (Satellite Application Facility on support to Operational Hydrology and Water Management, funded by EUMETSAT) is aimed at retrieving key hydrological parameters such as precipitation, soil moisture and snow cover. Within the H-SAF consortium, the Product Precipitation Validation Group (PPVG) evaluates the accuracy of instantaneous and accumulated precipitation products with respect to ground radar and rain gauge data, adopting the same methodology (using a Unique Common Code) throughout Europe. The adopted validation methodology can be summarized in the following steps: (1) quality control of ground data (radar and rain gauge); (2) spatial interpolation of rain gauge measurements; (3) up-scaling of radar data to the satellite native grid; (4) temporal comparison of satellite and ground-based precipitation products; and (5) production and evaluation of continuous and multi-categorical statistical scores for long time series and case studies. The statistical scores are evaluated on the satellite product's native grid. With the advent of the GPM era starting in March 2014, more global precipitation products have become available. The validation methodology developed in H-SAF is easily applicable to different precipitation products. In this work, we have validated instantaneous precipitation data estimated from the DPR (Dual-frequency Precipitation Radar) instrument onboard the GPM-CO (Global Precipitation Measurement Core Observatory) satellite. In particular, we have analyzed the near-surface and estimated precipitation fields provided in the 2A-Level product for 3 different scans (NS, MS and HS). The Italian radar mosaic managed by the National Department of Civil Protection, available operationally every 10 minutes, is used as the ground reference. The results obtained highlight the capability of the DPR to properly identify precipitation areas, with higher accuracy in estimating stratiform precipitation (especially for the HS). An
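
    The multi-categorical scores in step (5) are not enumerated in the abstract; as a reminder, the usual dichotomous skill scores derived from a satellite-versus-ground contingency table can be computed as in this short Python sketch (the counts are made up for illustration):

      def categorical_scores(hits, misses, false_alarms):
          """Standard dichotomous verification scores from a contingency table."""
          pod = hits / (hits + misses)                 # probability of detection
          far = false_alarms / (hits + false_alarms)   # false alarm ratio
          csi = hits / (hits + misses + false_alarms)  # critical success index
          return pod, far, csi

      # Hypothetical DPR-versus-radar counts, for illustration only
      print(categorical_scores(hits=820, misses=140, false_alarms=260))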

  2. Multifactor Screener in the 2000 National Health Interview Survey Cancer Control Supplement: Validation Results

    Cancer.gov

    Risk Factor Assessment Branch (RFAB) staff have assessed the validity of the Multifactor Screener in several studies: NCI's Observing Protein and Energy (OPEN) Study, the Eating at America's Table Study (EATS), and the joint NIH-AARP Diet and Health Study.

  3. Validation of gamma irradiator controls for quality and regulatory compliance

    NASA Astrophysics Data System (ADS)

    Harding, Rorry B.; Pinteric, Francis J. A.

    1995-09-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic — Validation of Irradiator Controls — is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community.

  4. Experimental Validation of Lightning-Induced Electromagnetic (Indirect) Coupling to Short Monopole Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crull, E W; Brown Jr., C G; Perkins, M P

    2008-07-30

    For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions of the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analytical models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model through a laboratory study in order to understand the indirect coupling of energy into a part, and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas and within about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside their region of validity. First, the antenna is insulated rather than a bare wire, and there are perhaps fringing field effects near the termination of the outer conductor that the formula does not take into account. Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5 inches (refer to Figure 5).

  5. Reliability and Validity Study of the Chamorro Assisted Gait Scale for People with Sprained Ankles, Walking with Forearm Crutches

    PubMed Central

    Ridao-Fernández, Carmen; Ojeda, Joaquín; Benítez-Lugo, Marisa; Sevillano, José Luis

    2016-01-01

    Objective The aim of this study was to design and validate a functional assessment scale for assisted gait with forearm crutches (Chamorro Assisted Gait Scale, CHAGS) and to assess its reliability in people with sprained ankles. Design Thirty subjects who suffered a sprained ankle (anterior talofibular ligament, first and second degree) were included in the study. A modified Delphi technique was used to establish content validity. The selected items were: pelvic and scapular girdle dissociation (1), deviation of the center of gravity (2), crutch inclination (3), step rhythm (4), symmetry of step length (5), cross support (6), simultaneous support of foot and crutch (7), forearm off (8), facing forward (9), and fluency (10). Two raters each viewed video recordings of the subjects' gait twice. Criterion-related validity was determined by the correlation between CHAGS and the Coding of eight criteria of qualitative gait analysis (Viel Coding). Internal consistency and inter- and intra-rater reliability were also tested. Results CHAGS showed a high negative correlation with Viel Coding. We obtained good internal consistency, the intra-class correlation coefficients ranged between 0.97 and 0.99, and the minimal detectable changes were acceptable. Conclusion The CHAGS scale is a valid and reliable tool for assessing assisted gait with crutches in people with sprained ankles performing partial weight relief of the lower limbs. PMID:27168236
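
    The minimal detectable change reported in the results is conventionally derived from the test-retest ICC through the standard error of measurement. A brief Python sketch of that relation (the 4-point standard deviation is a placeholder, not a CHAGS statistic):

      import math

      def minimal_detectable_change(sd, icc, z=1.96):
          """MDC95 from the standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
          sem = sd * math.sqrt(1.0 - icc)
          return z * math.sqrt(2.0) * sem

      # Hypothetical between-subject SD of 4 scale points, with the reported ICC range
      for icc in (0.97, 0.99):
          print(f"ICC = {icc}: MDC95 = {minimal_detectable_change(4.0, icc):.2f} points")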

  6. Validity of a digital diet estimation method for use with preschool children

    USDA-ARS?s Scientific Manuscript database

    The validity of using the Remote Food Photography Method (RFPM) for measuring the food intake of minority preschool children is not well documented. The aim of the study was to determine the validity of intake estimations made by human raters using the RFPM compared with those obtained by weigh...

  7. Reliability and validity of quantifying absolute muscle hardness using ultrasound elastography.

    PubMed

    Chino, Kentaro; Akagi, Ryota; Dohi, Michiko; Fukashiro, Senshi; Takahashi, Hideyuki

    2012-01-01

    Muscle hardness is a mechanical property that represents transverse muscle stiffness. A quantitative method that uses ultrasound elastography for quantifying absolute human muscle hardness has been previously devised; however, its reliability and validity have not been completely verified. This study aimed to verify the reliability and validity of this quantitative method. The Young's moduli of seven tissue-mimicking materials (in vitro; Young's modulus range, 20-80 kPa; increments of 10 kPa) and the human medial gastrocnemius muscle (in vivo) were quantified using ultrasound elastography. On the basis of the strain/Young's modulus ratio of two reference materials, one hard and one soft (Young's moduli of 7 and 30 kPa, respectively), the Young's moduli of the tissue-mimicking materials and the medial gastrocnemius muscle were calculated. The intra- and inter-investigator reliability of the method was confirmed on the basis of acceptably low coefficients of variation (≤6.9%) and substantially high intraclass correlation coefficients (≥0.77) obtained from all measurements. The correlation coefficient between the Young's moduli of the tissue-mimicking materials obtained using a mechanical method and ultrasound elastography was 0.996, which was equivalent to values previously obtained using magnetic resonance elastography. The Young's moduli of the medial gastrocnemius muscle obtained using ultrasound elastography were within the range of values previously obtained using magnetic resonance elastography. The reliability and validity of the quantitative method for measuring absolute muscle hardness using ultrasound elastography were thus verified.
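
    The two-reference calibration described above can be illustrated under the simplifying assumption that the probe applies an approximately uniform stress, so that measured strain is inversely proportional to Young's modulus. The Python sketch below is only a schematic of that idea, with invented strain values, and is not the exact procedure used in the study:

      def youngs_modulus_from_strain(strain_target, strain_refs, e_refs_kpa):
          """Estimate Young's modulus assuming a uniform applied stress,
          i.e. strain = stress / E in every material (a simplifying assumption)."""
          # stress implied by each reference coupler, then averaged
          stress = sum(s * e for s, e in zip(strain_refs, e_refs_kpa)) / len(e_refs_kpa)
          return stress / strain_target

      # Invented strains for the 7 kPa and 30 kPa reference materials
      print(f"{youngs_modulus_from_strain(0.010, [0.040, 0.0093], [7.0, 30.0]):.1f} kPa")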

  8. Specification Reformulation During Specification Validation

    NASA Technical Reports Server (NTRS)

    Benner, Kevin M.

    1992-01-01

    The goal of the ARIES Simulation Component (ASC) is to uncover behavioral errors by 'running' a specification at the earliest possible points during the specification development process. The problems to be overcome are the obvious ones: the specification may be large, incomplete, underconstrained, and/or uncompilable. This paper describes how specification reformulation is used to mitigate these problems. ASC begins by decomposing validation into specific validation questions. Next, the specification is reformulated to abstract out all those features unrelated to the identified validation question, thus creating a new specialized specification. ASC relies on a precise statement of the validation question and a careful application of transformations so as to preserve the essential specification semantics in the resulting specialized specification. This technique is a win if the resulting specialized specification is small enough that the user may easily handle any remaining obstacles to execution. This paper will: (1) describe what a validation question is; (2) outline analysis techniques for identifying which concepts are and are not relevant to a validation question; and (3) identify and apply transformations that remove these less relevant concepts while preserving those that are relevant.

  9. The Reliability and Validity of the Computerized Double Inclinometer in Measuring Lumbar Mobility

    PubMed Central

    MacDermid, Joy Christine; Arumugam, Vanitha; Vincent, Joshua Israel; Carroll, Krista L

    2014-01-01

    Study Design: Repeated measures reliability/validity study. Objectives: To determine the concurrent validity, test-retest, inter-rater and intra-rater reliability of lumbar flexion and extension measurements using the Tracker M.E. computerized dual inclinometer (CDI) in comparison to the modified-modified Schober (MMS). Summary of Background: Numerous studies have evaluated the reliability and validity of the various methods of measuring spinal motion, but the results are inconsistent. Differences in equipment and techniques make it difficult to correlate results. Methods: Twenty subjects with back pain and twenty without back pain were selected through convenience sampling. Two examiners measured sagittal-plane lumbar range of motion for each subject. Two separate tests with the CDI and one test with the MMS were conducted. Each test consisted of three trials. Instrument and examiner order was randomly assigned. Intra-class correlations (ICCs 2,2 and 2,2) and Pearson correlation coefficients (r) were used to calculate reliability and concurrent validity, respectively. Results: Intra-trial reliability was high to very high for both the CDI (ICCs 0.85-0.96) and the MMS (ICCs 0.84-0.98). However, reliability was poor to moderate when the CDI unit had to be repositioned, either by the same rater (ICCs 0.16-0.59) or by a different rater (ICCs 0.45-0.52). Inter-rater reliability for the MMS was moderate to high (ICCs 0.75-0.82), which bettered the moderate correlation obtained for the CDI (ICCs 0.45-0.52). Correlations between the CDI and MMS were poor for flexion (0.32; p<0.05) and poor to moderate (-0.42 to -0.51; p<0.05) for extension measurements. Conclusion: When using the CDI, an average of repeated tests is required to obtain moderate reliability. The MMS was more reliable than the CDI. The MMS and the CDI measure lumbar movement on different metrics that are not highly related to each other. PMID:25352928

  10. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    PubMed

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning and total ACT score and deductive reasoning were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.

  11. The Ca(2+)-EDTA chelation as standard reaction to validate Isothermal Titration Calorimeter measurements (ITC).

    PubMed

    Ràfols, Clara; Bosch, Elisabeth; Barbas, Rafael; Prohens, Rafel

    2016-07-01

    A study of the suitability of the chelation reaction of Ca(2+) with ethylenediaminetetraacetic acid (EDTA) as a validation standard for Isothermal Titration Calorimeter measurements has been performed, exploring the common experimental variables (buffer, pH, ionic strength and temperature). Results obtained under a variety of experimental conditions were corrected for the side reactions involved in the main process and for the experimental ionic strength and, finally, validated by comparison with the potentiometric reference values. It is demonstrated that the chelation reaction performed in 0.1 M acetate buffer at 25°C gives accurate and precise results and is robust enough to be adopted as a standard calibration process. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Results of in vivo measurements of strontium-90 body-burden in Urals residents: analyses of data obtained 2006-2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolstykh, E. I.; Bougrov, N. G.; Krivoshchapov, Victor A.

    2012-06-01

    A part of the Urals territory was contaminated with 90Sr and 137Cs in the 1950s as a result of accidents at the "Mayak" Production Association. The paper describes the analysis of in vivo 90Sr measurements in Urals residents. The measurements were performed with the whole-body counter SICH-9.1M in 2006-2012. In total, 5840 measurements of 4876 persons were performed from 2006 to 2012; the maximal measured value was 24 kBq. Earlier, similar measurements were performed with SICH-9.1 (1974-1997). Comparison of the results obtained with SICH-9.1 and SICH-9.1M has shown good agreement between the two data sets.

  13. Validity of body composition methods across ethnic population groups.

    PubMed

    Deurenberg, P; Deurenberg-Yap, M

    2003-10-01

    Most in vivo body composition methods rely on assumptions that may vary among different population groups as well as within the same population group. The assumptions are based on in vitro body composition (carcass) analyses. The majority of body composition studies were performed on Caucasians, and much of the information on the validity of methods and assumptions is available only for this ethnic group. It is assumed that these assumptions are also valid for other ethnic groups. However, if apparent differences across ethnic groups in body composition 'constants' and body composition 'rules' are not taken into account, biased information on body composition will be the result. This in turn may lead to misclassification of obesity or underweight at an individual as well as a population level. There is a need for more cross-ethnic population studies on body composition. Those studies should be carried out carefully, with adequate methodology and standardization, for the obtained information to be valuable.

  14. Validation of the sex estimation method elaborated by Schutkowski in the Granada Osteological Collection of identified infant and young children: Analysis of the controversy between the different ways of analyzing and interpreting the results.

    PubMed

    Irurita Olivares, Javier; Alemán Aguilera, Inmaculada

    2016-11-01

    Sex estimation of juveniles in the Physical and Forensic Anthropology context is currently a task with serious difficulties because the discriminatory bone characteristics are minimal until puberty. In addition, the small number of osteological collections of children available for research has made it difficult to develop effective methodologies in this regard. This study tested the characteristics of the ilium and jaw proposed by Schutkowski in 1993 for estimation of sex in subadults. The study sample consisted of 109 boys and 76 girls, ranging in age from 5 months of gestation to 6 years, from the identified osteological collection of Granada (Spain). For the analysis and interpretation of the results, we propose changes with respect to previous studies because we believe they involved methodological errors relating to the calculation of the probabilities of correct assignment and the sex distribution of the sample. The results showed correct assignment probabilities much lower than those obtained by Schutkowski and by other authors. The best results were obtained with the angle and depth of the sciatic notch, with probabilities of correct assignment of 0.73 and 0.80, respectively, when the male trait was observed. The results obtained with the other criteria were too poor to be valid in the context of Physical or Forensic Anthropology. From our results, we conclude that the Schutkowski method should not be used in a forensic context, and that the sciatic notch is the most dimorphic trait in subadults and, therefore, the most appropriate for developing more effective sex estimation methods.

  15. Are validated outcome measures used in distal radial fractures truly valid?

    PubMed Central

    Nienhuis, R. W.; Bhandari, M.; Goslings, J. C.; Poolman, R. W.; Scholtes, V. A. B.

    2016-01-01

    Objectives Patient-reported outcome measures (PROMs) are often used to evaluate the outcome of treatment in patients with distal radial fractures. Which PROM to select is often based on assessment of measurement properties, such as validity and reliability. Measurement properties are assessed in clinimetric studies, and results are often reviewed without considering the methodological quality of these studies. Our aim was to systematically review the methodological quality of clinimetric studies that evaluated measurement properties of PROMs used in patients with distal radial fractures, and to make recommendations for the selection of PROMs based on the level of evidence of each individual measurement property. Methods A systematic literature search was performed in PubMed, EMbase, CINAHL and PsycINFO databases to identify relevant clinimetric studies. Two reviewers independently assessed the methodological quality of the studies on measurement properties, using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Level of evidence (strong / moderate / limited / lacking) for each measurement property per PROM was determined by combining the methodological quality and the results of the different clinimetric studies. Results In all, 19 out of 1508 identified unique studies were included, in which 12 PROMs were rated. The Patient-rated wrist evaluation (PRWE) and the Disabilities of Arm, Shoulder and Hand questionnaire (DASH) were evaluated on most measurement properties. The evidence for the PRWE is moderate that its reliability, validity (content and hypothesis testing), and responsiveness are good. The evidence is limited that its internal consistency and cross-cultural validity are good, and its measurement error is acceptable. There is no evidence for its structural and criterion validity. The evidence for the DASH is moderate that its responsiveness is good. The evidence is limited that its reliability and the

  16. Cross-Cultural Adaptation and Validation of the Italian Version of SWAL-QOL.

    PubMed

    Ginocchio, Daniela; Alfonsi, Enrico; Mozzanica, Francesco; Accornero, Anna Rosa; Bergonzoni, Antonella; Chiarello, Giulia; De Luca, Nicoletta; Farneti, Daniele; Marilia, Simonelli; Calcagno, Paola; Turroni, Valentina; Schindler, Antonio

    2016-10-01

    The aim of the study was to evaluate the reliability and validity of the Italian SWAL-QOL (I-SWAL-QOL). The study consisted of five phases: item generation, reliability analysis, normative data generation, validity analysis, and responsiveness analysis. The item generation phase followed the five-step, cross-cultural, adaptation process of translation and back-translation. A group of 92 dysphagic patients was enrolled for the internal consistency analysis. Seventy-eight patients completed the I-SWAL-QOL twice, 2 weeks apart, for test-retest reliability analysis. A group of 200 asymptomatic subjects completed the I-SWAL-QOL for normative data generation. I-SWAL-QOL scores obtained by both the group of dysphagic subjects and asymptomatic ones were compared for validity analysis. I-SWAL-QOL scores were correlated with SF-36 scores in 67 patients with dysphagia for concurrent validity analysis. Finally, I-SWAL-QOL scores obtained in a group of 30 dysphagic patients before and after successful rehabilitation treatment were compared for responsiveness analysis. All the enrolled patients managed to complete the I-SWAL-QOL without needing any assistance, within 20 min. Internal consistency was acceptable for all I-SWAL-QOL subscales (α > 0.70). Test-retest reliability was also satisfactory for all subscales (ICC > 0.7). A significant difference between the dysphagic group and the control group was found in all I-SWAL-QOL subscales (p < 0.05). Mild to moderate correlations between I-SWAL-QOL and SF-36 subscales were observed. I-SWAL-QOL scores obtained in the pre-treatment condition were significantly lower than those obtained after swallowing rehabilitation. I-SWAL-QOL is reliable, valid, responsive to changes in QOL, and recommended for clinical practice and outcome research.

  17. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is to develop a method for estimating the standard deviation of the random errors of the Williams' series parameters obtained from measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, so that the parameters are derived with minimal errors, is also proposed. The method was used to evaluate the Williams' parameters obtained from data measured by the digital image correlation technique during testing of a three-point bending specimen.
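
    Because the measured stress components are linear in the Williams' coefficients, a common route to the standard deviations of the fitted coefficients is the least-squares parameter covariance under an assumption of independent, identically distributed measurement noise. The Python sketch below is a generic illustration of that route, not the authors' exact estimator:

      import numpy as np

      def coefficient_standard_deviations(design_matrix, noise_sd):
          """Standard deviations of least-squares coefficients for a linear model
          with i.i.d. measurement noise of standard deviation noise_sd."""
          a = np.asarray(design_matrix, dtype=float)
          covariance = noise_sd**2 * np.linalg.inv(a.T @ a)
          return np.sqrt(np.diag(covariance))

      # Hypothetical design matrix: 200 stress readings, 4 Williams terms retained
      rng = np.random.default_rng(0)
      a = rng.normal(size=(200, 4))
      print(np.round(coefficient_standard_deviations(a, noise_sd=0.5), 3))

    Retaining more terms tends to inflate the variances of the fitted coefficients while truncating too early biases them, which is the trade-off behind choosing an optimal number of terms.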

  18. RELIABILITY AND VALIDITY OF SUBJECTIVE ASSESSMENT OF LUMBAR LORDOSIS IN CONVENTIONAL RADIOGRAPHY.

    PubMed

    Ruhinda, E; Byanyima, R K; Mugerwa, H

    2014-10-01

    Reliability and validity studies of different lumbar curvature analysis and measurement techniques have been documented; however, there is limited literature on the reliability and validity of subjective visual analysis. Radiological assessment of the lumbar lordotic curve aids in early diagnosis of conditions even before neurologic changes set in. To ascertain the level of reliability and validity of subjective assessment of lumbar lordosis in conventional radiography. A blinded, repeated-measures diagnostic test was carried out on lumbar spine x-ray radiographs. Radiology Department at the Joint Clinical Research Centre (JCRC), Mengo, Kampala, Uganda. Seventy (70) lateral lumbar x-ray films were used for this study, obtained from the archive of the JCRC radiology department at Butikiro House, Mengo, Kampala. Poor observer agreement, both inter- and intra-observer, was found, with kappa values of 0.16. Inter-observer agreement was poorer than intra-observer agreement. Kappa values rose significantly when lumbar lordosis was clustered into four categories without grading each abnormality. The results confirm that subjective assessment of lumbar lordosis has low reliability and validity. Film quality has limited influence on observer reliability. This study further shows that fewer scale categories of lordosis abnormalities produce better observer reliability.
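
    The kappa statistic quoted above corrects raw agreement for the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e). A compact Python sketch over a hypothetical two-rater contingency table (the counts are illustrative, not the study's data):

      import numpy as np

      def cohens_kappa(table):
          """Cohen's kappa from a square confusion matrix of two raters' categories."""
          t = np.asarray(table, dtype=float)
          n = t.sum()
          p_observed = np.trace(t) / n
          p_expected = (t.sum(axis=0) * t.sum(axis=1)).sum() / n**2
          return (p_observed - p_expected) / (1.0 - p_expected)

      # Hypothetical grading of 70 films into 4 lordosis categories by two observers
      table = [[10, 5, 2, 1],
               [6, 12, 5, 2],
               [3, 6, 8, 3],
               [1, 2, 2, 2]]
      print(f"kappa = {cohens_kappa(table):.2f}")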

  19. Semi-physiologic model validation and bioequivalence trials simulation to select the best analyte for acetylsalicylic acid.

    PubMed

    Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival

    2015-07-10

    The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA) in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first- and second-generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM, using ASA pharmacokinetic parameters from the literature and reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first-generation or second-generation metabolite) was more sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte that proved to be most sensitive to the decrease in pharmaceutical quality, with the largest decrease in the Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
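
    The Michaelis-Menten kinetics mentioned for the first-pass metabolism correspond to a saturable rate law of the general form

      \frac{dC}{dt} = -\frac{V_{\max}\, C}{K_m + C} ,

    so that apparent clearance falls as the concentration approaches and exceeds K_m. This is only the generic kinetic law; the specific compartments and parameter values of the published semi-physiologic NONMEM model are not reproduced here.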

  20. Airborne Validation of Spatial Properties Measured by the CALIPSO Lidar

    NASA Technical Reports Server (NTRS)

    McGill, Matthew J.; Vaughan, Mark A.; Trepte, Charles Reginald; Hart, William D.; Hlavka, Dennis L.; Winker, David M.; Keuhn, Ralph

    2007-01-01

    The primary payload onboard the Cloud-Aerosol Lidar Infrared Pathfinder Satellite Observations (CALIPSO) satellite is a dual-wavelength backscatter lidar designed to provide vertical profiling of clouds and aerosols. The satellite was launched in April 2006, and the first data from this new satellite were obtained in June 2006. As with any new satellite measurement capability, an immediate post-launch requirement is to verify that the data being acquired are correct lest scientific conclusions begin to be drawn based on flawed data. A standard approach to verifying satellite data is to take a similar, or validation, instrument and fly it onboard a research aircraft. Using an aircraft allows the validation instrument to get directly under the satellite so that both the satellite instrument and the aircraft instrument are sensing the same region of the atmosphere. Although there are almost always some differences in the sampling capabilities of the two instruments, it is nevertheless possible to directly compare the measurements. To validate the measurements from the CALIPSO lidar, a similar instrument, the Cloud Physics Lidar, was flown onboard the NASA high-altitude ER-2 aircraft during July-August 2006. This paper presents results to demonstrate that the CALIPSO lidar is properly calibrated and the CALIPSO Level 1 data products are correct. The importance of the results is to demonstrate to the research community that CALIPSO Level 1 data can be confidently used for scientific research.

  1. Principles for valid histopathologic scoring in research

    PubMed Central

    Gibson-Corley, Katherine N.; Olivier, Alicia K.; Meyerholz, David K.

    2013-01-01

    Histopathologic scoring is a tool by which semi-quantitative data can be obtained from tissues. Initially, a thorough understanding of the experimental design, study objectives and methods are required to allow the pathologist to appropriately examine tissues and develop lesion scoring approaches. Many principles go into the development of a scoring system such as tissue examination, lesion identification, scoring definitions and consistency in interpretation. Masking (a.k.a. “blinding”) of the pathologist to experimental groups is often necessary to constrain bias and multiple mechanisms are available. Development of a tissue scoring system requires appreciation of the attributes and limitations of the data (e.g. nominal, ordinal, interval and ratio data) to be evaluated. Incidence, ordinal and rank methods of tissue scoring are demonstrated along with key principles for statistical analyses and reporting. Validation of a scoring system occurs through two principal measures: 1) validation of repeatability and 2) validation of tissue pathobiology. Understanding key principles of tissue scoring can help in the development and/or optimization of scoring systems so as to consistently yield meaningful and valid scoring data. PMID:23558974

  2. Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.

    PubMed

    Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J

    2013-04-01

    We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. Copyright © 2013 Wiley Periodicals, Inc.

  3. URANS simulations of the tip-leakage cavitating flow with verification and validation procedures

    NASA Astrophysics Data System (ADS)

    Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin

    2018-04-01

    In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, the verification and validation (V&V) procedures are used in the present paper. Errors of numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when the cavitation occurs, is close to the side wall.
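
    The Richardson-extrapolation-based error estimators mentioned above follow the usual grid-convergence construction: with solutions f_1, f_2, f_3 on the fine, medium and coarse grids and a constant refinement ratio r, the observed order of accuracy and the extrapolated value are

      p = \frac{\ln\left[(f_3 - f_2)/(f_2 - f_1)\right]}{\ln r} , \qquad f_{exact} \approx f_1 + \frac{f_1 - f_2}{r^{p} - 1} ,

    and a grid convergence index of the form GCI = F_s |(f_1 - f_2)/f_1| / (r^p - 1), with safety factor F_s, bounds the numerical uncertainty. This is the standard textbook form; the seven specific estimators used in the paper are not detailed in the abstract.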

  4. Stroke Impact Scale 3.0: Reliability and Validity Evaluation of the Korean Version.

    PubMed

    Choi, Seong Uk; Lee, Hye Sun; Shin, Joon Ho; Ho, Seung Hee; Koo, Mi Jung; Park, Kyoung Hae; Yoon, Jeong Ah; Kim, Dong Min; Oh, Jung Eun; Yu, Se Hwa; Kim, Dong A

    2017-06-01

    To establish the reliability and validity of the Korean version of the Stroke Impact Scale (K-SIS) 3.0. A total of 70 post-stroke patients were enrolled. All subjects were evaluated for general characteristics, the Mini-Mental State Examination (MMSE), the National Institutes of Health Stroke Scale (NIHSS), the Modified Barthel Index, and the Hospital Anxiety and Depression Scale (HADS). The SF-36 and K-SIS 3.0 assessed their health-related quality of life. Statistical analysis after evaluation determined the reliability and validity of the K-SIS 3.0. A total of 70 patients (mean age, 54.97 years) participated in this study. Internal consistency of the SIS 3.0 (Cronbach's alpha) was obtained, and all domains had good coefficients, above the accepted threshold of 0.70. Test-retest reliability of the SIS 3.0 required correlation (Spearman's rho) of the same domain scores obtained on the first and second assessments. Results were above 0.5, with the exception of social participation and mobility. Concurrent validity of the K-SIS 3.0 was assessed using the SF-36 and other scales with the same or similar domains. Each domain of the K-SIS 3.0 had a positive correlation with the corresponding or similar domain of the SF-36 and the other scales (HADS, MMSE, and NIHSS). The newly developed K-SIS 3.0 showed high inter- and intra-rater reliability and test-retest reliability, together with high concurrent validity with the original and various other scales, for patients with stroke. The K-SIS 3.0 can therefore be used for stroke patients to assess their health-related quality of life and treatment efficacy.

  5. DEVELOPMENT OF MOTIVATION SCALE - CLINICAL VALIDATION WITH ALCOHOL DEPENDENTS

    PubMed Central

    Neeliyara, Teresa; Nagalakshmi, S.V.

    1994-01-01

    This study focuses on the development of a comprehensive multi-dimensional scale for assessing motivation for change in the alcohol dependent population. After establishing face validity, the items evolved were administered to a normal sample of 600 male subjects in whom psychiatric illness was ruled out. The data thus obtained were subjected to factor analysis. Six factors were obtained, which accounted for 55.2% of the variance. These together formed an 80-item, five-point scale, and norms were established on a sample of 600 normal subjects. Further clinical validation was carried out on 30 alcohol dependent subjects and 30 normals. The status of motivation was found to be inadequate in alcohol dependent individuals as compared to the normals. Split-half reliability was assessed and the tool was found to be highly reliable. PMID:21743674

  6. Validation of the Policy Advocacy Engagement Scale for frontline healthcare professionals.

    PubMed

    Jansson, Bruce S; Nyamathi, Adeline; Heidemann, Gretchen; Duan, Lei; Kaplan, Charles

    2017-05-01

    Nurses, social workers, and medical residents are ethically mandated to engage in policy advocacy to promote the health and well-being of patients and increase access to care. Yet, no instrument exists to measure their level of engagement in policy advocacy. To describe the development and validation of the Policy Advocacy Engagement Scale, designed to measure frontline healthcare professionals' engagement in policy advocacy with respect to a broad range of issues, including patients' ethical rights, quality of care, culturally competent care, preventive care, affordability/accessibility of care, mental healthcare, and community-based care. Cross-sectional data were gathered to estimate the content and construct validity, internal consistency, and test-retest reliability of the Policy Advocacy Engagement Scale. Participants and context: In all, 97 nurses, 94 social workers, and 104 medical residents (N = 295) were recruited from eight acute-care hospitals in Los Angeles County. Ethical considerations: Informed consent was obtained via Qualtrics and covered purposes, risks and benefits; voluntary participation; confidentiality; and compensation. Institutional Review Board approval was obtained from the University of Southern California and all hospitals. Results supported the validity of the concept and the instrument. In confirmatory factor analysis, seven items loaded onto one component with indices indicating adequate model fit. A Pearson correlation coefficient of .36 supported the scale's test-retest stability. Cronbach's α of .93 indicated strong internal consistency. The Policy Advocacy Engagement Scale demonstrated satisfactory psychometric properties in this initial test. Findings should be considered within the context of the study's limitations, which include a low response rate and limited geographic scope. The Policy Advocacy Engagement Scale appears to be the first validated scale to measure frontline healthcare professionals' engagement in policy

  7. Summary: Experimental validation of real-time fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform, since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no better substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for the evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, and this process is a difficult, if not impossible, task for a complex system. Also, to set up such experiments for measurements, physical hardware must exist. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.

  8. Validation of OVERFLOW for Supersonic Retropropulsion

    NASA Technical Reports Server (NTRS)

    Schauerhamer, Guy

    2012-01-01

    The goal is to softly land high-mass vehicles (tens of metric tons) on Mars. Supersonic Retropropulsion (SRP) is a potential method of deceleration. The current method, supersonic parachutes, does not scale beyond 1 metric ton. CFD is of increasing importance because flight and experimental data at these conditions are difficult to obtain, but CFD must first be validated at these conditions.

  9. Enhancement and Validation of an Arab Surname Database

    PubMed Central

    Schwartz, Kendra; Beebani, Ganj; Sedki, Mai; Tahhan, Mamon; Ruterbusch, Julie J.

    2015-01-01

    Objectives Arab Americans constitute a large, heterogeneous, and quickly growing subpopulation in the United States. Health statistics for this group are difficult to find because US governmental offices do not recognize Arab as separate from white. The development and validation of an Arab- and Chaldean-American name database will enhance research efforts in this population subgroup. Methods A previously validated name database was supplemented with newly identified names gathered primarily from vital statistic records and then evaluated using a multistep process. This process included 1) review by 4 Arabic- and Chaldean-speaking reviewers, 2) ethnicity assessment by social media searches, and 3) self-report of ancestry obtained from a telephone survey. Results Our Arab- and Chaldean-American name algorithm has a positive predictive value of 91% and a negative predictive value of 100%. Conclusions This enhanced name database and algorithm can be used to identify Arab Americans in health statistics data, such as cancer and hospital registries, where they are often coded as white, to determine the extent of health disparities in this population. PMID:24625771

  10. Structured Uncertainty Bound Determination From Data for Control and Performance Validation

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    2003-01-01

    This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with reasonable confidence, near-optimal robust closed-loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model-validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, new software, the Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determines uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state of the art in uncertainty bound determination and in turn facilitates benchmarking of robust control technology. To help clarify the methodology and the use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of flexible structure dynamics, and the second involves a closed-loop performance validation of a ducted fan based on an uncertainty bound obtained from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.

  11. Double-Pulsed 2-Micrometer Lidar Validation for Atmospheric CO2 Measurements

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Refaat, Tamer F.; Yu, Jirong; Petros, Mulugeta; Remus, Ruben

    2015-01-01

    A double-pulsed, 2-micron Integrated Path Differential Absorption (IPDA) lidar instrument for atmospheric carbon dioxide (CO2) measurements has been successfully developed at NASA Langley Research Center (LaRC). Based on a direct-detection technique, the instrument can be operated on the ground or onboard a small aircraft. Key features of this compact, rugged, and reliable IPDA lidar include high transmitted laser energy; wavelength tuning, switching, and locking; and sensitive detection. As a proof of concept, ground and airborne CO2 measurements and their validation are presented. Ground validation of the IPDA lidar CO2 measurements was conducted at NASA LaRC using hard targets and a calibrated in-situ sensor. Airborne validation, conducted onboard the NASA B-200 aircraft, included CO2 plume detection from power station incinerators, comparison with an in-flight CO2 in-situ sensor, and comparison with air sampling at different altitudes conducted by NOAA at the same site. Airborne measurements, spanning 20 hours, were obtained for different target conditions. Ground targets included soil, vegetation, sand, snow, and ocean. In addition, cloud slicing was examined over the ocean. These flight validations were conducted at different altitudes, up to 7 km, with different wavelength-controlled weighting functions. CO2 measurement results agree with modeling conducted using the different sensors, as discussed.
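
    The CO2 information in an IPDA lidar comes from the differential absorption optical depth between the on-line and off-line wavelengths, obtained from energy-normalized target returns. The sketch below illustrates only that basic two-wavelength retrieval step; the numbers are arbitrary assumptions, not instrument values.

      import numpy as np

      def differential_optical_depth(p_on, p_off, e_on, e_off):
          """Two-way differential absorption optical depth from hard-target returns.
          p_*: received signal power; e_*: transmitted pulse energy (on/off line)."""
          return 0.5 * np.log((p_off / e_off) / (p_on / e_on))

      # Illustrative numbers only (arbitrary units).
      daod = differential_optical_depth(p_on=0.62, p_off=1.15, e_on=1.0, e_off=1.0)
      print(f"Differential absorption optical depth: {daod:.3f}")
      # Dividing by the integrated weighting function (not shown) would yield the
      # column-averaged CO2 dry-air mixing ratio.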

  12. A Historical Forcing Ice Sheet Model Validation Framework for Greenland

    NASA Astrophysics Data System (ADS)

    Price, S. F.; Hoffman, M. J.; Howat, I. M.; Bonin, J. A.; Chambers, D. P.; Kalashnikova, I.; Neumann, T.; Nowicki, S.; Perego, M.; Salinger, A.

    2014-12-01

    We propose an ice sheet model testing and validation framework for Greenland for the years 2000 to the present. Following Perego et al. (2014), we start with a realistic ice sheet initial condition that is in quasi-equilibrium with climate forcing from the late 1990s. This initial condition is integrated forward in time while simultaneously applying (1) surface mass balance forcing (van Angelen et al., 2013) and (2) outlet glacier flux anomalies, defined using a new dataset of Greenland outlet glacier flux for the past decade (Enderlin et al., 2014). Modeled rates of mass and elevation change are compared directly to remote sensing observations obtained from GRACE and ICESat. Here, we present a detailed description of the proposed validation framework, including the ice sheet model and model forcing approach, the model-to-observation comparison process, and initial results comparing model output and observations for the time period 2000-2013.

  13. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study, based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma, was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results for all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, both qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space, rather than a given condition, for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved.

  14. An exploratory sequential design to validate measures of moral emotions.

    PubMed

    Márquez, Margarita G; Delgado, Ana R

    2017-05-01

    This paper presents an exploratory and sequential mixed methods approach in validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase, when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.
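
    The Rasch model used in the quantitative phase expresses the probability of endorsing an item as a logistic function of the difference between person ability and item difficulty. A minimal dichotomous Rasch sketch follows; the ability and difficulty values are illustrative assumptions, not estimates from the study.

      import numpy as np

      def rasch_probability(theta: float, b: float) -> float:
          """Dichotomous Rasch model: P(endorse) given ability theta and item difficulty b."""
          return 1.0 / (1.0 + np.exp(-(theta - b)))

      abilities = np.array([-1.0, 0.0, 1.5])   # person abilities (logits), assumed
      difficulty = 0.5                         # one item's difficulty (logits), assumed
      for theta in abilities:
          print(f"theta={theta:+.1f}  P(endorse)={rasch_probability(theta, difficulty):.2f}")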

  15. Prediction of thoracic dimensions and spine length on the basis of individual pelvic dimensions: validation of the use of pelvic inlet width obtained by radiographs compared with computed tomography.

    PubMed

    Gold, Meryl; Dombek, Michael; Miller, Patricia E; Emans, John B; Glotzbecker, Michael P

    2014-01-01

    Retrospective review. To validate the pelvic inlet width (PIW) measurement obtained on radiograph as an independent standard used to correlate with thoracic dimensions (TDs) in treated and untreated patients with early-onset scoliosis. In children with early-onset scoliosis, the change in TD and spine length is a key treatment goal. Quantifying this change is confounded by varied growth rates and differing diagnoses. PIW measured on computed tomographic (CT) scan in patients without scoliosis has been shown to correlate with TD in an age-independent manner. The first arm included 49 patients with scoliosis who had both a CT scan and a pelvic radiograph. Agreement between PIW measurements on CT scan and radiograph was analyzed. The second arm consisted of 163 patients (age, 0.2-18.7 yr) with minimal spinal deformity (mean Cobb, 9.0°) and radiographs in which PIW was measurable. PIW was compared with previously published CT-based TD measurements: maximal chest width, T1-T12 height, and T1-S1 height. Linear regression analysis was used to develop and validate sex-specific predictive equations for each TD measurement on the basis of PIW. Interobserver reliability was evaluated for all measurements. Bland-Altman analysis indicated agreement with no dependence on observed value, but a consistent 8.5 mm (95% CI: 7.2-9.9 mm) difference in CT scan measurement compared with radiographical PIW measurement. Sex and PIW were significantly correlated with each TD measurement (P < 0.01). Predictive models were validated and may be used to estimate TD measurements on the basis of sex and radiographical PIW. Intraclass correlation coefficients for all measurements were between 0.978 and 0.997. PIW on radiographs and CT scan correlate in patients with deformity, and with spine and TD in patients with minimal deformity. It is a fast, reliable method of assessing growth while lowering patients' radiation exposure. It can be reliably used to assess patients with early-onset scoliosis and
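
    The Bland-Altman agreement analysis cited here summarizes paired CT and radiograph measurements by the mean difference (bias) and its 95% limits of agreement. Below is a minimal sketch with simulated paired PIW values; the simulated data are assumptions, although the 8.5 mm offset mirrors the bias reported above.

      import numpy as np

      def bland_altman(a: np.ndarray, b: np.ndarray):
          """Return bias and 95% limits of agreement between paired measurements."""
          diff = a - b
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      rng = np.random.default_rng(2)
      piw_radiograph = rng.normal(110.0, 15.0, size=49)           # simulated PIW on radiograph (mm)
      piw_ct = piw_radiograph + 8.5 + rng.normal(0, 3.5, 49)      # CT reads ~8.5 mm wider here
      bias, (lo, hi) = bland_altman(piw_ct, piw_radiograph)
      print(f"Bias: {bias:.1f} mm, 95% limits of agreement: {lo:.1f} to {hi:.1f} mm")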

  16. Preliminary validation of the Spanish version of the Multiple Stimulus Types Ambiguity Tolerance Scale (MSTAT-II).

    PubMed

    Arquero, José L; McLain, David L

    2010-05-01

    Despite widespread interest in ambiguity tolerance and other information-related individual differences, existing measures are conceptually dispersed and psychometrically weak. This paper presents the Spanish version of MSTAT-II, a short, stimulus-oriented, and psychometrically improved measure of an individual's orientation toward ambiguous stimuli. Results obtained reveal adequate reliability, validity, and temporal stability. These results support the use of MSTAT-II as an adequate measure of ambiguity tolerance.

  17. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet regarded sufficiently. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both

  18. The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).

    PubMed

    Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S

    2016-12-01

    The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.

  19. The individualized neuromuscular quality of life questionnaire: cultural translation and psychometric validation for the Dutch population.

    PubMed

    Seesing, Femke M; van Vught, Lisanne Ewm; Rose, Michael R; Drost, Gea; van Engelen, Baziel G M; van der Wilt, Gert-Jan

    2015-04-01

    In this study we describe the translation and psychometric evaluation of the Dutch Individualized Neuromuscular Quality of Life (INQoL) questionnaire. Backward and forward translation of the questionnaire was performed, and psychometric properties were assessed on the basis of reliability and validity. Two hundred six patients were included in the study. Reliability analyses resulted in Cronbach alpha values of >0.70 for all subdomains. Known-group validity showed a significant correlation between INQoL scores and severity, as well as age, for the majority of subdomains. Item-total correlation for overall quality of life was satisfactory. Concurrent validity with the SF-36 and EQ-5D was good (range of Spearman correlation coefficients -0.43 to -0.76). This study resulted in a questionnaire that is appropriate for use in the Dutch-speaking population to measure quality of life among patients with a wide variety of muscle disorders. This confirms and extends data obtained in the UK, US, Italy, and Serbia. © 2014 Wiley Periodicals, Inc.

  20. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.

    2000-01-01

    The paper presents the results obtained by using the NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) used different methods for modeling piezoelectric effects. In NASTRAN(Registered Trademark), a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS(Registered Trademark) processed the voltage directly using piezoelectric finite elements. The results of the finite element models were validated against the experimental results.

  1. Visual reproduction subtest of the Wechsler Memory Scale-Revised: analysis of construct validity.

    PubMed

    Williams, M A; Rich, M A; Reed, L K; Jackson, W T; LaMarche, J A; Boll, T J

    1998-11-01

    This study assessed the construct validity of Visual Reproduction (VR) Cards A (Flags) and B (Boxes) from the original Wechsler Memory Scale (WMS) compared to Flags and Boxes from the revised edition of the WMS (WMS-R). Independent raters scored Flags and Boxes using both the original and revised scoring criteria and correlations were obtained with age, education, IQ, and four separate criterion memory measures. Results show that for Flags, there is a tendency for the revised scoring criteria to produce improved construct validity. For Boxes, however, there was a trend in the opposite direction, with the revised scoring criteria demonstrating worse construct validity. Factor analysis suggests that Flags are a more distinct measure of visual memory, whereas Boxes are more complex and significantly associated with conceptual reasoning abilities. Using the revised scoring criteria, Boxes were found to be more strongly related to IQ than Flags. This difference was not found using the original scoring criteria.

  2. Validation of the Waterpipe Tolerance Questionnaire among Jordanian School-Going Adolescent Waterpipe Users

    PubMed Central

    Alzyoud, Sukaina; Veeranki, Sreenivas P.; Kheirallah, Khalid A.; Shotar, Ali M.; Pbert, Lori

    2016-01-01

    Introduction: Waterpipe use among adolescents has been increasing progressively, yet no studies have been reported that assess the validity and reliability of a nicotine dependence scale in this population. The current study aims to assess the validity and reliability of an Arabic version of the modified Waterpipe Tolerance Questionnaire (WTQ) among school-going adolescent waterpipe users. Methods: In a cross-sectional study conducted in Jordan, information on waterpipe use among 333 school-going adolescents aged 11-18 years was obtained using the Arabic version of the WTQ. An exploratory factor analysis and correlation matrices were used to assess the validity and reliability of the WTQ. Results: The WTQ had an internal consistency alpha of 0.73, indicating a moderate level of reliability. The scale showed multidimensionality, with items loading on two factors, namely waterpipe consumption and morning smoking. Conclusion: This study reports nicotine dependence levels, measured with the WTQ, among school-going adolescents who identify themselves as waterpipe users. PMID:26383198

  3. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  4. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  5. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  6. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  7. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  8. Validity and reliability of the Persian version of mobile phone addiction scale

    PubMed Central

    Mazaheri, Maryam Amidi; Karbasi, Mojtaba

    2014-01-01

    Background: With regard to the large number of mobile phone users, especially among college students in Iran, addiction to the mobile phone is attracting increasing concern. There is an urgent need for a reliable and valid instrument to measure this phenomenon. This study examines the validity and reliability of the Persian version of the mobile phone addiction scale (MPAIS) in college students. Materials and Methods: This methodological study was done at Isfahan University of Medical Sciences. One thousand one hundred and eighty students were selected by convenience sampling. The English version of the MPAI questionnaire was translated into Persian with the approach of Jones et al. (Challenges in language, culture, and modality: Translating English measures into American Sign Language. Nurs Res 2006; 55: 75-81). Its reliability was tested by Cronbach's alpha, and its dimensional validity was evaluated using Pearson correlation coefficients with other measures of mobile phone use and the IAT. Construct validity was evaluated using exploratory subscale analysis. Results: Cronbach's alpha was 0.86 for the total PMPAS, 0.84 for subscale 1 (eight items), 0.81 for subscale 2 (five items), and 0.77 for subscale 3 (two items). There were significantly positive correlations between the PMPAS score and the IAT (r = 0.453, P < 0.001) and other measures of mobile phone use. Principal component subscale analysis yielded a three-subscale structure (inability to control craving; feeling anxious and lost; mood improvement) that accounted for 60.57% of the total variance. The results for discriminant validity showed that all items' correlations with their related subscale were greater than 0.5 and correlations with unrelated subscales were less than 0.5. Conclusion: Considering the lack of a valid and reliable questionnaire for measuring addiction to the mobile phone, the PMPAS could be a suitable instrument for measuring mobile phone addiction in future research. PMID:24778668

  9. The Validity and Reliability of a Performance Assessment Procedure in Ice Hockey

    ERIC Educational Resources Information Center

    Nadeau, Luc; Richard, Jean-Francois; Godbout, Paul

    2008-01-01

    Background: Coaches and physical educators must obtain valid data relating to the contribution of each of their players in order to assess their level of performance in team sport competition. This information must also be collected and used in real game situations to be more valid. Developed initially for a physical education class context, the…

  10. Reliability and Validity of Quantifying Absolute Muscle Hardness Using Ultrasound Elastography

    PubMed Central

    Chino, Kentaro; Akagi, Ryota; Dohi, Michiko; Fukashiro, Senshi; Takahashi, Hideyuki

    2012-01-01

    Muscle hardness is a mechanical property that represents transverse muscle stiffness. A quantitative method that uses ultrasound elastography for quantifying absolute human muscle hardness has been previously devised; however, its reliability and validity have not been completely verified. This study aimed to verify the reliability and validity of this quantitative method. The Young’s moduli of seven tissue-mimicking materials (in vitro; Young’s modulus range, 20–80 kPa; increments of 10 kPa) and the human medial gastrocnemius muscle (in vivo) were quantified using ultrasound elastography. On the basis of the strain/Young’s modulus ratio of two reference materials, one hard and one soft (Young’s moduli of 7 and 30 kPa, respectively), the Young’s moduli of the tissue-mimicking materials and medial gastrocnemius muscle were calculated. The intra- and inter-investigator reliability of the method was confirmed on the basis of acceptably low coefficient of variations (≤6.9%) and substantially high intraclass correlation coefficients (≥0.77) obtained from all measurements. The correlation coefficient between the Young’s moduli of the tissue-mimicking materials obtained using a mechanical method and ultrasound elastography was 0.996, which was equivalent to values previously obtained using magnetic resonance elastography. The Young’s moduli of the medial gastrocnemius muscle obtained using ultrasound elastography were within the range of values previously obtained using magnetic resonance elastography. The reliability and validity of the quantitative method for measuring absolute muscle hardness using ultrasound elastography were thus verified. PMID:23029231
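
    The quantification step described here converts measured elastography strains into an absolute Young's modulus using two reference couplers of known modulus (7 and 30 kPa). The sketch below illustrates one simple reading of that calibration, assuming a common applied stress across materials; it is a sketch of the principle only, not the authors' exact algorithm, and the strain values are assumptions.

      import numpy as np

      def youngs_modulus_from_references(strain_tissue, strain_refs, e_refs):
          """Estimate tissue Young's modulus from elastography strains, assuming a
          common applied stress so that stress = E * strain for each material.
          strain_refs / e_refs: strains and known moduli (kPa) of the reference couplers."""
          # Stress inferred from each reference coupler; average them for the calibration.
          stresses = np.asarray(e_refs) * np.asarray(strain_refs)
          stress = stresses.mean()
          return stress / strain_tissue

      # Illustrative strains (arbitrary units); reference moduli of 7 and 30 kPa as in the abstract.
      e_est = youngs_modulus_from_references(strain_tissue=0.010,
                                             strain_refs=[0.050, 0.012],
                                             e_refs=[7.0, 30.0])
      print(f"Estimated muscle Young's modulus: {e_est:.1f} kPa")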

  11. Development and psychometric validation of a questionnaire to evaluate nurses' adherence to recommendations for preventing pressure ulcers (QARPPU).

    PubMed

    Moya-Suárez, Ana Belén; Morales-Asencio, José Miguel; Aranda-Gallardo, Marta; Enríquez de Luna-Rodríguez, Margarita; Canca-Sánchez, José Carlos

    2017-11-01

    The main objective of this work is the development and psychometric validation of an instrument to evaluate nurses' adherence to the main recommendations issued for preventing pressure ulcers. An instrument was designed based on the main recommendations for the prevention of pressure ulcers published in various clinical practice guidelines. The face and content validity of the instrument were then evaluated by an expert group. The instrument was applied to 249 Spanish nurses who took part in a cross-sectional study to obtain a psychometric evaluation (reliability and construct validity). The study data were compiled from June 2015 to July 2016. From the results of the psychometric analysis, a final 18-item, 4-factor questionnaire was derived, which explained 60.5% of the variance and presented the following optimal indices of fit: CMIN/DF: 1.40, p < 0.001; GFI: 0.93; NFI: 0.92; CFI: 0.98; TLI: 0.97; RMSEA: 0.04 (90% CI 0.025-0.054). The results obtained show that the instrument presents suitable psychometric properties for evaluating nurses' adherence to recommendations for the prevention of pressure ulcers. Copyright © 2017 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved.

  12. Using Internet Search Engines to Obtain Medical Information: A Comparative Study

    PubMed Central

    Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun

    2012-01-01

    Background The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. Objective To compare major Internet search engines in their usability of obtaining medical and health information. Methods We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Results Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. The search

  13. Obtaining molecular and structural information from 13C-14N systems with 13C FIREMAT experiments.

    PubMed

    Strohmeier, Mark; Alderman, D W; Grant, David M

    2002-04-01

    The effect of dipolar coupling to 14N on 13C FIREMAT (five pi replicated magic angle turning) experiments is investigated. A method is developed for fitting the 13C FIREMAT FID employing the full theory to extract the 13C-14N dipolar and 13C chemical shift tensor information. The analysis requires prior knowledge of the electric field gradient (EFG) tensor at the 14N nucleus. In order to validate the method, the analysis was performed for the amino acids alpha-glycine, gamma-glycine, l-alanine, l-asparagine, and l-histidine on FIREMAT FIDs recorded at 13C frequencies of 50 and 100 MHz. The dipolar and chemical shift data obtained with this analysis are in very good agreement with previous single-crystal 13C NMR results and neutron diffraction data on alpha-glycine, l-alanine, and l-asparagine. The values for gamma-glycine and l-histidine obtained with this new method are reported for the first time. The effect of uncertainties in the EFG tensor on the resultant 13C chemical shift and dipolar tensor values is assessed. (c) 2002 Elsevier Science (USA).

  14. Validity of instruments to assess students' travel and pedestrian safety

    PubMed Central

    2010-01-01

    Background Safe Routes to School (SRTS) programs are designed to make walking and bicycling to school safe and accessible for children. Despite their growing popularity, few validated measures exist for assessing important outcomes such as type of student transport or pedestrian safety behaviors. This research validated the SRTS school travel survey and a pedestrian safety behavior checklist. Methods Fourth grade students completed a brief written survey, with set responses, on how they got to school that day. Test-retest reliability was obtained 3-4 hours apart. Convergent validity of the SRTS travel survey was assessed by comparison to parents' reports. For the measure of pedestrian safety behavior, 10 research assistants observed 29 students at a school intersection for completion of 8 selected pedestrian safety behaviors. Reliability was determined in two ways: correlations between the research assistants' ratings and those of the Principal Investigator (PI), and intraclass correlations (ICC) across research assistant ratings. Results The SRTS travel survey had high test-retest reliability (κ = 0.97, n = 96, p < 0.001) and convergent validity (κ = 0.87, n = 81, p < 0.001). The pedestrian safety behavior checklist had moderate reliability across research assistants' ratings (ICC = 0.48) and moderate correlation with the PI (r = 0.55, p < 0.01). When two raters simultaneously used the instrument, the ICC increased to 0.65. Overall percent agreement (91%), sensitivity (85%), and specificity (83%) were acceptable. Conclusions These validated instruments can be used to assess SRTS programs. The pedestrian safety behavior checklist may benefit from further formative work. PMID:20482778
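
    Test-retest agreement for a categorical outcome such as travel mode is quantified here with Cohen's kappa. A minimal sketch with simulated test and retest reports follows; the mode labels and data are assumptions, not the study's data.

      import numpy as np

      def cohens_kappa(x, y):
          """Cohen's kappa for two categorical ratings of the same subjects."""
          x, y = np.asarray(x), np.asarray(y)
          cats = np.union1d(x, y)
          p_obs = np.mean(x == y)
          p_exp = sum(np.mean(x == c) * np.mean(y == c) for c in cats)
          return (p_obs - p_exp) / (1.0 - p_exp)

      # Simulated test-retest travel-mode reports (W = walk, C = car, B = bus) for 20 students.
      test   = np.array(list("WWCCBWCWWBCCWWBWCWCB"))
      retest = np.array(list("WWCCBWCWWBCCWWBWCWCC"))
      print(f"Test-retest kappa: {cohens_kappa(test, retest):.2f}")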

  15. Validity and Reliability of the Italian Version of the Functioning Assessment Short Test (FAST) in Bipolar Disorder

    PubMed Central

    Moro, Maria Francesca; Colom, Francesc; Floris, Francesca; Pintus, Elisa; Pintus, Mirra; Contini, Francesca; Carta, Mauro Giovanni

    2012-01-01

    Background: The Functioning Assessment Short Test (FAST) is a brief instrument designed to assess the main functioning problems experienced by psychiatric patients, specifically bipolar patients. It includes 24 items assessing impairment or disability in six domains of functioning: autonomy, occupational functioning, cognitive functioning, financial issues, interpersonal relationships and leisure time. The aim of this study is to measure the validity and reliability of the Italian version of this instrument. Methods: Twenty-four patients with DSM-IV TR bipolar disorder and 20 healthy controls were recruited and evaluated in three private clinics in Cagliari (Sardinia, Italy). The psychometric properties of the FAST (feasibility, internal consistency, concurrent validity, discriminant validity (patients vs controls and euthymic vs manic and depressed patients), and test-retest reliability) were analyzed. Results: The internal consistency obtained was very high, with a Cronbach's alpha of 0.955. A highly significant negative correlation with the GAF was obtained (r = -0.9; p < 0.001), pointing to a reasonable degree of concurrent validity. The FAST showed good test-retest reliability between two independent evaluations one week apart (mean κ = 0.73). Total FAST scores were lower in controls than in bipolar patients, and lower in euthymic patients than in depressed or manic patients. Conclusion: The Italian version of the FAST showed psychometric properties similar to those of the original version with regard to internal consistency and discriminant validity, and showed good test-retest reliability as measured by the κ statistic. PMID:22905035

  16. Validation of the Hungarian version of Carlson's Work-Family Conflict Scale.

    PubMed

    Ádám, Szilvia; Konkoly Thege, Barna

    2017-11-30

    Work-family conflict has been associated with adverse individual (e.g., cardiovascular diseases, anxiety disorders), organizational (e.g., absenteeism, lower productivity), and societal outcomes (e.g., increased use of healthcare services). However, lack of standardized measurement has hindered the comparison of data across various cultures. The purpose of this study was to develop the Hungarian version of Carlson et al.'s multidimensional Work-Family Conflict Scale and establish its reliability and validity. In a sample of 557 employees (145 men and 412 women), we conducted confirmatory factor analysis to investigate the factor structure and factorial invariance of the instrument across sex and data collection points and evaluated the tool's validity by assessing relationships between its dimensions and scales measuring general, marital, and job-related stress, depressive symptomatology, vital exhaustion, functional somatic symptoms, and social support. Our results showed that a six-factor model, similarly to that of the original instrument, fit the data best. Internal consistency of the six dimensions and the whole instrument was adequate. Convergent and divergent validity of the instrument and discriminant validity of the dimensions were also supported by our data. This study provides empirical support for the validity and reliability of the Hungarian version of the multidimensional Work-Family Conflict Scale. Deployment of this measure may allow for the generation of data that can be compared to those obtained in different cultural settings with the same instrument and hence advance our understanding of cross-cultural aspects of work-family conflict.

  17. The Validity of Quasi-Steady-State Approximations in Discrete Stochastic Simulations

    PubMed Central

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R.

    2014-01-01

    In biochemical networks, reactions often occur on disparate timescales and can be characterized as either fast or slow. The quasi-steady-state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by nonelementary reaction-rate functions (e.g., Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the nonelementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of nonelementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the nonelementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in nonelementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the prefactor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when nonelementary reaction functions are obtained using the total QSSA. Our work provides an apparently novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. PMID:25099817
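
    In the stochastic QSSA described here, fast elementary reactions are not simulated; instead, a nonelementary rate (for example, a Hill function) evaluated at the current copy numbers of the slow species is used directly as a propensity in the stochastic simulation. Below is a minimal Gillespie-style sketch of this idea for a hypothetical protein whose production is repressed by a Hill-type term; the species and parameter values are assumptions, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      # Stochastic QSSA sketch: production of protein P with a Hill-type (nonelementary)
      # propensity standing in for fast promoter binding/unbinding, plus linear degradation.
      k_max, K, n_hill, gamma = 20.0, 40.0, 2.0, 0.1

      def propensities(p):
          prod = k_max * K**n_hill / (K**n_hill + p**n_hill)   # Hill repression (QSSA of fast reactions)
          deg = gamma * p
          return np.array([prod, deg])

      p, t, t_end = 0, 0.0, 200.0
      while t < t_end:
          a = propensities(p)
          a0 = a.sum()
          t += rng.exponential(1.0 / a0)        # waiting time to the next reaction
          if rng.uniform() * a0 < a[0]:
              p += 1                            # production event
          else:
              p -= 1                            # degradation event
      print(f"Protein copy number at t={t_end}: {p}")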

  18. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of adhoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in the future digita-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformtion are presented. Basic working definitions of two pivotal ideas (validation and system life-cyle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  19. Validation of stationary phases in (111)In-pentetreotide planar chromatography.

    PubMed

    Moreno-Ortega, E; Mena-Bares, L M; Maza-Muret, F R; Hidalgo-Ramos, F J; Vallejo-Casas, J A

    2013-01-01

    Since Pall-German stopped manufacturing ITLC-SG, it has become necessary to validate alternative stationary phases. To validate different stationary phases versus ITLC-SG Pall-Gelman in the determination of the radiochemical purity (RCP) of (111)In-pentetreotide ((111)In-Octreoscan) by planar chromatography. We conducted a case-control study, which included 66 (111)In-pentetreotide preparations. We determined the RCP by planar chromatography, using a freshly prepared solution of 0,1M sodium citrate (pH 5) and the following stationary phases: ITLC-SG (Pall-Gelman) (reference method), iTLC-SG (Varian), HPTLC silica gel 60 (Merck), Whatman 1, Whatman 3MM and Whatman 17. For each of the methods, we calculated: PRQ, relative front values (RF) of the radiopharmaceutical and free (111)In, chromatographic development time, resolution between peaks. We compared the results obtained with the reference method. The statistical analysis was performed using the SPSS program. The p value was calculated for the study of statistical significance. The highest resolution is obtained with HPTLC silica gel 60 (Merck). However, the chromatographic development time is too long (mean=33.62minutes). Greater resolution is obtained with iTLC-SG (Varian) than with the reference method, with lower chromatographic development time (mean=3.61minutes). Very low resolutions are obtained with Whatman paper, essentially with Whatman 1 and 3MM. Therefore, we do not recommend their use. Although iTLC-SG (Varian) and HPTLC silica gel 60 (Merck) are suitable alternatives to ITLC-SG (Pall-Gelman) in determining the RCP of (111)In-pentetreotide, iTLC-SG (Varian) is the method of choice due to its lower chromatographic development time. Copyright © 2012 Elsevier España, S.L. and SEMNIM. All rights reserved.

  20. Experimental validation of solid rocket motor damping models

    NASA Astrophysics Data System (ADS)

    Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio

    2017-12-01

    In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second scope of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired by the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model. Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe
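
    The structural-to-viscous damping conversion discussed here can be illustrated, in its simplest modal form, by matching the energy dissipated per cycle at a chosen frequency, which gives an equivalent viscous damping of eta*k/omega per mode (equivalently, a damping ratio of eta/2). The sketch below applies this per-mode rule; the modal stiffnesses, frequencies, and loss factor are illustrative assumptions, and this is not the specific matrix method validated in the paper.

      import numpy as np

      def equivalent_viscous_damping(k_modal, eta, omega):
          """Equivalent modal viscous damping from structural damping coefficient eta.
          Matches energy dissipated per cycle at frequency omega: c_eq = eta * k / omega."""
          return eta * k_modal / omega

      # Illustrative modal stiffnesses (N/m) and natural frequencies (rad/s).
      k_modal = np.array([2.0e6, 8.5e6, 3.1e7])
      omega_n = np.array([62.8, 188.5, 377.0])
      eta = 0.04                                   # structural damping coefficient (assumed)
      c_eq = equivalent_viscous_damping(k_modal, eta, omega_n)
      zeta = c_eq * omega_n / (2.0 * k_modal)      # equivalent damping ratio, equals eta/2 here
      print("Equivalent viscous damping:", c_eq)
      print("Equivalent damping ratios:", zeta)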

  2. Development and validation of the occupational identity scale.

    PubMed

    Melgosa, J

    1987-12-01

    Ego-identity research utilizing Marcia's (1966) identity statuses has been prolific during the past 15 years. The four types of statuses--achievement, moratorium, foreclosure, diffusion--have become part of ego-identity development theory. The development of a research tool to further study one of the dimensions of ego-identity development (the occupational dimension) was perceived as a need. Therefore, items were created utilizing the criteria established by previous research and were content validated by a group of experts. These statements were validated with 417 students from six high schools and colleges. Responses were analyzed and measures of construct and concurrent validity were obtained. Indexes of internal consistency and item discrimination were also estimated. Through factor analysis techniques, four factors were identified for the occupational identity statuses; they accounted for 49 per cent of the total variance. Reliability coefficients ranged between 0.70 and 0.87. Concurrent validity coefficients ranged between 0.38 and 0.79 when correlated with a similar instrument. After deletion of those items that did not contribute significantly to the validity of the instrument, a 28-item Occupational Identity Scale was established.

  3. A validation of the construct and reliability of an emotional intelligence scale applied to nursing students1

    PubMed Central

    Espinoza-Venegas, Maritza; Sanhueza-Alvarado, Olivia; Ramírez-Elizondo, Noé; Sáez-Carrillo, Katia

    2015-01-01

    OBJECTIVE: The current study aimed to validate the construct and reliability of an emotional intelligence scale. METHOD: The Trait Meta-Mood Scale-24 was applied to 349 nursing students. The process included content validation, which involved expert reviews, pilot testing, measurements of reliability using Cronbach's alpha, and factor analysis to corroborate the validity of the theoretical model's construct. RESULTS: Adequate Cronbach coefficients were obtained for all three dimensions, and factor analysis confirmed the scale's dimensions (perception, comprehension, and regulation). CONCLUSION: The Trait Meta-Mood Scale is a reliable and valid tool to measure the emotional intelligence of nursing students. Its use allows for accurate determinations of individuals' abilities to interpret and manage emotions. At the same time, this new construct is of potential importance for measurements in nursing leadership; educational, organizational, and personal improvements; and the establishment of effective relationships with patients. PMID:25806642

  4. Health-Related Quality of Life after Radical Cystectomy for Bladder Cancer in Elderly Patients with Ileal Orthotopic Neobladder or Ileal Conduit: Results from a Multicentre Cross-Sectional Study Using Validated Questionnaires.

    PubMed

    Cerruto, Maria Angela; D'Elia, Carolina; Siracusano, Salvatore; Saleh, Omar; Gacci, Mauro; Cacciamani, Giovanni; De Marco, Vincenzo; Porcaro, Antonio Benito; Balzarro, Matteo; Niero, Mauro; Lonardi, Cristina; Iafrate, Massimo; Bassi, Pierfrancesco; Imbimbo, Ciro; Racioppi, Marco; Talamini, Renato; Ciciliato, Stefano; Serni, Sergio; Carini, Marco; Verze, Paolo; Artibani, Walter

    2018-01-01

    To evaluate health-related quality of life (HR-QoL) outcomes in elderly patients with different types of urinary diversion (UD), ileal conduit (IC) and ileal orthotopic neobladder (IONB), after radical cystectomy (RC) for bladder cancer, using validated self-reported cancer-specific instruments. We retrospectively reviewed 77 patients who received an IC or an IONB after RC. HR-QoL was assessed with specific and validated disease questionnaires administered at the last follow-up. At univariate analysis, at a mean follow-up of 60.91 ± 5.63 months, IONB results were favourable with regard to the following HR-QoL aspects: nausea and vomiting (p = 0.045), pain (p = 0.049), appetite loss (p = 0.03), constipation (p = 0.000), financial impact (p = 0.012) and cognitive functioning (p = 0.000). This last functional aspect was significantly worse in female patients (p = 0.029). Emotional functioning was significantly better in patients without long-term complications (p = 0.016). At multivariate analysis, male gender and IONB were independent predictors of better cognitive functioning, while long-term complications negatively affected emotional functioning. The results obtained suggest that an IONB can also be suitable for elderly patients, with a few selected advantages over an IC. Preoperative patient selection, counselling, education and active participation in the decision-making process lead to a more suitable choice of treatment. © 2018 S. Karger AG, Basel.

  5. Maximizing the Information and Validity of a Linear Composite in the Factor Analysis Model for Continuous Item Responses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2008-01-01

    This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…
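
    One classical result behind the validity-maximization part of this problem is that the correlation of a linear composite with an external variable is maximized by weights proportional to the inverse covariance matrix of the scores times their covariances with that variable. The sketch below is a generic numerical illustration of that result, not the paper's own procedure; the covariance values are assumptions.

      import numpy as np

      # Weights maximizing the correlation of the composite y = w' x with an external
      # variable z: w proportional to Sigma_x^{-1} sigma_xz (standard result; the
      # covariances below are illustrative assumptions).
      sigma_x = np.array([[1.0, 0.5, 0.3],
                          [0.5, 1.0, 0.4],
                          [0.3, 0.4, 1.0]])        # covariance matrix of factor scores
      sigma_xz = np.array([0.6, 0.4, 0.2])         # covariances with the external criterion
      w = np.linalg.solve(sigma_x, sigma_xz)
      max_validity = np.sqrt(sigma_xz @ w)         # assumes Var(z) = 1
      print("Optimal weights:", np.round(w, 3))
      print("Maximum composite validity:", round(float(max_validity), 3))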

  6. Development and Validation of the Controller Acceptance Rating Scale (CARS): Results of Empirical Research

    NASA Technical Reports Server (NTRS)

    Lee, Katharine K.; Kerns, Karol; Bone, Randall

    2001-01-01

    The measurement of operational acceptability is important for the development, implementation, and evolution of air traffic management decision support tools. The Controller Acceptance Rating Scale was developed at NASA Ames Research Center for the development and evaluation of the Passive Final Approach Spacing Tool. CARS was modeled after a well-known pilot evaluation rating instrument, the Cooper-Harper Scale, and has since been used in the evaluation of the User Request Evaluation Tool, developed by MITRE's Center for Advanced Aviation System Development. In this paper, we provide a discussion of the development of CARS and an analysis of the empirical data collected with CARS to examine construct validity. Results of intraclass correlations indicated statistically significant reliability for the CARS. From the subjective workload data that were collected in conjunction with the CARS, it appears that the expected set of workload attributes was correlated with the CARS. As expected, the analysis also showed that CARS was a sensitive indicator of the impact of decision support tools on controller operations. Suggestions for future CARS development and its improvement are also provided.

  7. Hospital blood bank information systems accurately reflect patient transfusion: results of a validation study.

    PubMed

    McQuilten, Zoe K; Schembri, Nikita; Polizzotto, Mark N; Akers, Christine; Wills, Melissa; Cole-Sinclair, Merrole F; Whitehead, Susan; Wood, Erica M; Phillips, Louise E

    2011-05-01

    Hospital transfusion laboratories collect information regarding blood transfusion and some registries gather clinical outcomes data without transfusion information, providing an opportunity to integrate these two sources to explore effects of transfusion on clinical outcomes. However, the use of laboratory information system (LIS) data for this purpose has not been validated previously. Validation of LIS data against individual patient records was undertaken at two major centers. Data regarding all transfusion episodes were analyzed over seven 24-hour periods. Data regarding 596 units were captured including 399 red blood cell (RBC), 95 platelet (PLT), 72 plasma, and 30 cryoprecipitate units. They were issued to: inpatient 221 (37.1%), intensive care 109 (18.3%), outpatient 95 (15.9%), operating theater 45 (7.6%), emergency department 27 (4.5%), and unrecorded 99 (16.6%). All products recorded by LIS as issued were documented as transfused to intended patients. Median time from issue to transfusion initiation could be calculated for 535 (89.8%) components: RBCs 16 minutes (95% confidence interval [CI], 15-18 min; interquartile range [IQR], 7-30 min), PLTs 20 minutes (95% CI, 15-22 min; IQR, 10-37 min), fresh-frozen plasma 33 minutes (95% CI, 14-83 min; IQR, 11-134 min), and cryoprecipitate 3 minutes (95% CI, -10 to 42 min; IQR, -15 to 116 min). Across a range of blood component types and destinations, comparison of LIS data with clinical records demonstrated concordance. The difference between LIS timing data and patient clinical records reflects expected time to transport, check, and prepare transfusion but does not affect the validity of linkage for most research purposes. Linkage of clinical registries with LIS data can therefore provide robust information regarding individual patient transfusion. This enables analysis of joint data sets to determine the impact of transfusion on clinical outcomes. © 2010 American Association of Blood Banks.
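
    The time-from-issue figures above are simple order statistics. Below is a minimal sketch of how such medians and interquartile ranges could be computed from an LIS extract, assuming a pandas DataFrame with hypothetical column names; the rows are invented, not study data.

```python
import pandas as pd

# Hypothetical LIS extract: one row per component, with issue and
# transfusion-start timestamps (column names are illustrative only).
df = pd.DataFrame({
    "component": ["RBC", "RBC", "PLT", "FFP"],
    "issued_at": pd.to_datetime(["2011-03-01 09:00", "2011-03-01 10:15",
                                 "2011-03-01 11:00", "2011-03-01 12:30"]),
    "started_at": pd.to_datetime(["2011-03-01 09:18", "2011-03-01 10:40",
                                  "2011-03-01 11:20", "2011-03-01 13:05"]),
})

# Minutes from issue to transfusion initiation.
df["delay_min"] = (df["started_at"] - df["issued_at"]).dt.total_seconds() / 60

# Median and interquartile range per component type, as reported in the study.
summary = df.groupby("component")["delay_min"].quantile([0.25, 0.5, 0.75]).unstack()
summary.columns = ["q25", "median", "q75"]
print(summary)
```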

  8. Validation of general job satisfaction in the Korean Labor and Income Panel Study.

    PubMed

    Park, Shin Goo; Hwang, Sang Hee

    2017-01-01

    The purpose of this study is to assess the validity and reliability of general job satisfaction (JS) in the Korean Labor and Income Panel Study (KLIPS). We used the data from the 17th wave (2014) of the nationwide KLIPS, which selected a representative panel sample of Korean households and individuals aged 15 or older residing in urban areas. We included in this study 7679 employed subjects (4529 males and 3150 females). The general JS instrument consisted of five items rated on a scale from 1 (strongly disagree) to 5 (strongly agree). The general JS reliability was assessed using the corrected item-total correlation and Cronbach's alpha coefficient. The validity of general JS was assessed using confirmatory factor analysis (CFA) and Pearson's correlation. The corrected item-total correlations ranged from 0.736 to 0.837. Therefore, no items were removed. Cronbach's alpha for general JS was 0.925, indicating excellent internal consistency. The CFA of the general JS model showed a good fit. Pearson's correlation coefficients for convergent validity showed moderate or strong correlations. The results obtained in our study confirm the validity and reliability of general JS.
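
    The corrected item-total correlations and Cronbach's alpha reported above can be computed directly from the item-response matrix. The sketch below is a generic illustration on simulated Likert data, not the KLIPS data or the study's analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    corrs = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        corrs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(corrs)

# Toy data: 5 items rated 1-5 by simulated respondents sharing one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
scores = np.clip(np.rint(3 + latent + rng.normal(scale=0.7, size=(500, 5))), 1, 5)
print(round(cronbach_alpha(scores), 3), np.round(corrected_item_total(scores), 3))
```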

  9. Impact of Cognitive Abilities and Prior Knowledge on Complex Problem Solving Performance - Empirical Results and a Plea for Ecologically Valid Microworlds.

    PubMed

    Süß, Heinz-Martin; Kretzschmar, André

    2018-01-01

    The original aim of complex problem solving (CPS) research was to bring the cognitive demands of complex real-life problems into the lab in order to investigate problem solving behavior and performance under controlled conditions. Up until now, the validity of psychometric intelligence constructs has been scrutinized with regard to their importance for CPS performance. At the same time, different CPS measurement approaches competing for the title of the best way to assess CPS have been developed. In the first part of the paper, we investigate the predictability of CPS performance on the basis of the Berlin Intelligence Structure Model and Cattell's investment theory as well as an elaborated knowledge taxonomy. In the first study, 137 students managed a simulated shirt factory (Tailorshop; i.e., a complex, real-life-oriented system) twice, while in the second study, 152 students completed a forestry scenario (FSYS; i.e., a complex artificial-world system). The results indicate that reasoning, specifically numerical reasoning (Studies 1 and 2) and figural reasoning (Study 2), is the only relevant predictor among the intelligence constructs. We discuss the results with reference to the Brunswik symmetry principle. Path models suggest that reasoning and prior knowledge influence problem solving performance in the Tailorshop scenario mainly indirectly. In addition, different types of system-specific knowledge independently contribute to predicting CPS performance. The results of Study 2 indicate that working memory capacity, assessed as an additional predictor, has no incremental validity beyond reasoning. We conclude that (1) cognitive abilities and prior knowledge are substantial predictors of CPS performance, and (2) in contrast to former and recent interpretations, there is insufficient evidence to consider CPS a unique ability construct. In the second part of the paper, we discuss our results in light of recent CPS research, which predominantly utilizes the

  10. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of the controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  11. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonne, François; Bonnay, Patrick; Alamir, Mazen

    2014-01-29

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of the controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
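
    As a rough illustration of the control problem described in the two records above, the toy simulation below applies a classical discrete PI law (the kind of hand-tuned loop the authors aim to replace with model-based control) to a made-up first-order model of one controllable subsystem under a pulsed load; none of the parameters come from the paper.

```python
import numpy as np

# Illustrative first-order discrete model of one controllable subsystem:
# deviation x driven by a valve command u and a pulsed thermal load w.
a, b, dt = 0.98, 0.05, 1.0           # made-up model parameters (per time step)
kp, ki = 2.0, 0.1                     # PI gains, hand-tuned for this toy model

x, integ, setpoint = 0.0, 0.0, 0.0
history = []
for t in range(600):
    w = 1.0 if 200 <= t < 400 else 0.0     # pulsed thermal load
    err = setpoint - x
    integ += err * dt
    u = kp * err + ki * integ              # classical PI law
    x = a * x + b * u + 0.02 * w           # plant update
    history.append(x)

print("max deviation during the pulse:", max(abs(v) for v in history))
```

    A model-based scheme would instead use the assembled dynamic model to anticipate the pulsed load and coordinate several actuators at once, rather than reacting through independent loops as above.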

  12. Incident CTS in a large pooled cohort study: associations obtained by a Job Exposure Matrix versus associations obtained from observed exposures.

    PubMed

    Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A

    2018-03-29

    There is growing use of a job exposure matrix (JEM) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM with those obtained using other exposure measures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (Occupational Information Network, O*NET), and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures of force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided similar exposure-disease associations for CTS when compared with associations obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
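
    A hedged sketch of the core analysis named above, a Cox proportional hazards model relating exposure scores to incident CTS, is shown below; it assumes the third-party lifelines package and uses entirely simulated data with illustrative column names.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package is installed

rng = np.random.default_rng(1)
n = 500
# Toy cohort: follow-up time (years), CTS event indicator, and two exposure
# scores standing in for JEM-derived force and repetition (all simulated).
df = pd.DataFrame({
    "followup_years": rng.uniform(0.1, 2.8, n),
    "cts": rng.binomial(1, 0.1, n),
    "force_score": rng.normal(0, 1, n),
    "repetition_score": rng.normal(0, 1, n),
    "age": rng.integers(18, 65, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="cts")
print(cph.summary[["coef", "exp(coef)", "p"]])   # hazard ratios per covariate
```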

  13. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    PubMed

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance in pharmacogenetic algorithms has not yet been well assessed. We developed a new dosing algorithm including polymorphisms associated both with warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. The derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who were on stable anticoagulation therapy for at least 3 months with different oral anticoagulation therapy (OAT) indications. Performance of the new algorithm, evaluated with the mean absolute error (MAE; defined as the absolute value of the difference between the observed daily maintenance dose and the predicted daily dose), the correlation with the observed dose and the R² value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm correctly assigned 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week) and high (≥45 mg/week) dosing ranges, respectively. Our data showed a significant increase in predictive accuracy among patients requiring a high warfarin dose compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require higher warfarin dosages.
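
    The two performance measures named above, the MAE between observed and predicted dose and the assignment to low/intermediate/high dosing ranges, are straightforward to compute. The snippet below is a generic sketch with invented doses, not the study's algorithm or data.

```python
import numpy as np

def mae(observed, predicted):
    """Mean absolute error between observed and predicted doses."""
    return np.mean(np.abs(np.asarray(observed) - np.asarray(predicted)))

def dose_range(weekly_dose):
    """Low / intermediate / high dosing ranges as defined in the abstract (mg/week)."""
    if weekly_dose <= 25:
        return "low"
    elif weekly_dose < 45:
        return "intermediate"
    return "high"

observed = [21, 35, 52, 28, 47]     # toy weekly doses, mg/week
predicted = [24, 31, 44, 30, 50]

print("MAE:", mae(observed, predicted))
correct = sum(dose_range(o) == dose_range(p) for o, p in zip(observed, predicted))
print("correctly assigned to dosing range:", correct, "of", len(observed))
```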

  14. [Validation of the IBS-SSS].

    PubMed

    Betz, C; Mannsdörfer, K; Bischoff, S C

    2013-10-01

    Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterised by abdominal pain associated with stool abnormalities and changes in stool consistency. Diagnosis of IBS is based on characteristic symptoms and the exclusion of other gastrointestinal diseases. A number of questionnaires exist to assist diagnosis and assessment of the severity of the disease. One of these is the Irritable Bowel Syndrome Severity Scoring System (IBS-SSS). The IBS-SSS was validated in 1997 in its English version. In the present study, the IBS-SSS has been validated in its German version. To do this, a cohort of 60 patients with IBS according to the Rome III criteria was compared with a control group of healthy individuals (n = 38). We studied the sensitivity and reproducibility of the score, as well as its sensitivity to detect changes in symptom severity. The results of the German validation largely reflect the results of the English validation. The German version of the IBS-SSS is also a valid, meaningful and reproducible questionnaire with a high sensitivity to assess changes in symptom severity, especially in IBS patients with moderate symptoms. It is unclear whether the IBS-SSS is also a valid questionnaire in IBS patients with severe symptoms, because this group of patients was not studied. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Transcultural adaptation and validation of the patient empowerment in long-term conditions questionnaire.

    PubMed

    Garcimartin, Paloma; Comin-Colet, Josep; Delgado-Hito, Pilar; Badosa-Marcé, Neus; Linas-Alonso, Anna

    2017-05-04

    Patient empowerment is a key element in improving health outcomes, increasing satisfaction amongst users and obtaining higher treatment compliance. The main objective of this study is to validate the Spanish version of the questionnaire "Patient empowerment in long-term conditions", which evaluates patients' level of empowerment in chronic diseases. The secondary objective is to identify factors which predict baseline empowerment and changes (improvement or deterioration) in patients with heart failure (HF). An observational, prospective psychometric design will be used to validate the questionnaire (aim 1), together with a prospective cohort study (aim 2). The study will include 121 patients with a confirmed diagnosis of HF. Three measurements (baseline, at 15 days and at 3 months) of quality of life, self-care and empowerment will be carried out. Descriptive and inferential analyses will be used. For the first aim of the study (validation), test-retest reproducibility will be assessed through the intraclass correlation coefficient; internal consistency through Cronbach's alpha coefficient; construct validity through Pearson's correlation coefficient; and sensitivity to change through the effect size coefficient. A validated questionnaire to measure the level of empowerment of patients with chronic diseases could be an effective tool to assess the results of health care service provision. It will also allow us to identify, at an early stage, those groups of patients with a low level of empowerment who could become a risk group due to poor management of the disease, with a high rate of decompensation and a higher rate of health system resource use.

  16. Health Sciences-Evidence Based Practice questionnaire (HS-EBP) for measuring transprofessional evidence-based practice: Creation, development and psychometric validation

    PubMed Central

    Fernández-Domínguez, Juan Carlos; de Pedro-Gómez, Joan Ernest; Morales-Asencio, José Miguel; Sastre-Fullana, Pedro; Sesé-Abad, Albert

    2017-01-01

    Introduction Most of the EBP measuring instruments available to date present limitations both in the operationalisation of the construct and in the rigour of their psychometric development, as revealed in the literature review performed. The aim of this paper is to provide rigorous and adequate reliability and validity evidence for the scores of a new transdisciplinary psychometric tool, the Health Sciences Evidence-Based Practice (HS-EBP) questionnaire, for measuring the EBP construct in health sciences professionals. Methods A pilot study and a subsequent two-stage validation test sample were conducted to progressively refine the instrument to a reduced 60-item version with a five-factor latent structure. Reliability was analysed through both Cronbach's alpha coefficient and intraclass correlations (ICC). The latent structure was contrasted using confirmatory factor analysis (CFA) following a model comparison approach. Evidence of criterion validity of the scores obtained was achieved by considering attitudinal resistance to change, burnout, and quality of professional life as criterion variables, while convergent validity was assessed using the Spanish version of the Evidence-Based Practice Questionnaire (EBPQ-19). Results Adequate reliability evidence (both Cronbach's alpha and ICC) was obtained for the five dimensions of the questionnaire. According to the CFA model comparison, the best fit corresponded to the five-factor model (RMSEA = 0.049; 90% CI RMSEA = [0.047; 0.050]; CFI = 0.99). Adequate criterion and convergent validity evidence was also provided. Finally, the HS-EBP showed the capability to find differences between EBP training levels, an important piece of evidence of decision validity. Conclusions The reliability and validity evidence obtained regarding the HS-EBP confirms the adequate operationalisation of the EBP construct as a process put into practice to respond to every clinical situation arising in the daily practice of professionals in health sciences (transprofessional). The

  17. Internal validation of two new retrotransposons-based kits (InnoQuant® HY and InnoTyper® 21) at a forensic lab.

    PubMed

    Martins, Cátia; Ferreira, Paulo Miguel; Carvalho, Raquel; Costa, Sandra Cristina; Farinha, Carlos; Azevedo, Luísa; Amorim, António; Oliveira, Manuela

    2018-02-01

    Obtaining a genetic profile from pieces of evidence collected at a crime scene is the primary objective of forensic laboratories. New procedures, methods, kits, software or equipment must be carefully evaluated and validated before their implementation. The constant development of new methodologies for DNA testing leads to a steady process of validation, which consists of demonstrating that the technology is robust, reproducible, and reliable throughout a defined range of conditions. The present work aims to internally validate two new retrotransposon-based kits (InnoQuant® HY and InnoTyper® 21) under the working conditions of the Laboratório de Polícia Científica da Polícia Judiciária (LPC-PJ). For the internal validation of InnoQuant® HY and InnoTyper® 21, sensitivity, repeatability, reproducibility, and mixture tests were performed, together with a concordance study between these new kits and those currently in use at LPC-PJ (Quantifiler® Duo and GlobalFiler™). The results obtained for the sensitivity, repeatability, and reproducibility tests demonstrated that both InnoQuant® HY and InnoTyper® 21 are robust, reproducible, and reliable. The results of the concordance studies demonstrate that InnoQuant® HY produced quantification results in nearly 29% more samples than Quantifiler® Duo (indicating that this new kit is more effective for challenging samples), while the differences observed between InnoTyper® 21 and GlobalFiler™ are not significant. Nevertheless, the utility of InnoTyper® 21 has been proven, especially by the successful amplification of a greater number of complete genetic profiles (27 vs. 21). The results herein presented allowed the internal validation of both InnoQuant® HY and InnoTyper® 21 and their implementation in the LPC-PJ laboratory routine for the treatment of challenging samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Quality assessment of two- and three-dimensional unstructured meshes and validation of an upwind Euler flow solver

    NASA Technical Reports Server (NTRS)

    Woodard, Paul R.; Batina, John T.; Yang, Henry T. Y.

    1992-01-01

    Quality assessment procedures are described for two-dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing-front-method grid generation package of programs. Solutions of Euler's equations for these meshes are obtained at low-angle-of-attack, transonic conditions. Results for these cases, obtained as part of a validation study, demonstrate the accuracy of an implicit upwind Euler solution algorithm.
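
    The quality measures listed above can be computed per element from vertex coordinates. Below is a minimal sketch for a single planar triangle covering minimum angle and aspect ratio; the exact definitions (and those for skewness and stretching) may differ in detail from the ones used in the paper, so treat this as illustrative only.

```python
import numpy as np

def triangle_quality(p0, p1, p2):
    """Minimum interior angle (degrees) and aspect ratio (longest edge divided
    by the altitude onto it) for one planar triangular element."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p0 - p2)
    # Interior angles from the law of cosines (clipped for numerical safety).
    angles = np.degrees([
        np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1.0, 1.0)),
        np.arccos(np.clip((a**2 + c**2 - b**2) / (2 * a * c), -1.0, 1.0)),
        np.arccos(np.clip((a**2 + b**2 - c**2) / (2 * a * b), -1.0, 1.0)),
    ])
    # Area via the shoelace formula.
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    longest = max(a, b, c)
    aspect = longest / (2.0 * area / longest)
    return min(angles), aspect

print(triangle_quality((0, 0), (1, 0), (0.5, 0.05)))   # thin sliver: small angle, large aspect
```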

  19. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    NASA Astrophysics Data System (ADS)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery is dependent on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises the aforementioned alignment. Currently, the clinical standard to monitor brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are considerations for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation 'atlas' containing potential deformation solutions derived from a biomechanical model that accounts for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combinatory 'atlas' solution fit to best match measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as a measurement of surface deformation and used to drive the inverse problem. To assess the model accuracy, the subsurface shift of targets between preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the results reported are encouraging.

  20. German validation of the Conners Adult ADHD Rating Scales (CAARS) II: reliability, validity, diagnostic sensitivity and specificity.

    PubMed

    Christiansen, H; Kis, B; Hirsch, O; Matthies, S; Hebebrand, J; Uekermann, J; Abdel-Hamid, M; Kraemer, M; Wiltfang, J; Graf, E; Colla, M; Sobanski, E; Alm, B; Rösler, M; Jacob, C; Jans, T; Huss, M; Schimmelmann, B G; Philipsen, A

    2012-07-01

    The German version of the Conners Adult ADHD Rating Scales (CAARS) has been shown to have very high model fit in confirmatory factor analyses, with the established factors inattention/memory problems, hyperactivity/restlessness, impulsivity/emotional lability, and problems with self-concept, in both large healthy control and ADHD patient samples. This study now presents data on the psychometric properties of the German CAARS self-report (CAARS-S) and observer-report (CAARS-O) questionnaires. The CAARS-S/O and questions on sociodemographic variables were filled out by 466 patients with ADHD and 847 healthy control subjects who had already participated in two prior studies; a total of 896 observer data sets were available. Cronbach's alpha was calculated to obtain internal reliability coefficients. Pearson correlations were performed to assess test-retest reliability, and concurrent, criterion, and discriminant validity. Receiver operating characteristic (ROC) analyses were used to establish sensitivity and specificity for all subscales. Coefficient alphas ranged from .74 to .95, and test-retest reliability from .85 to .92 for the CAARS-S and from .65 to .85 for the CAARS-O. All CAARS subscales, except problems with self-concept, correlated significantly with the Barratt Impulsiveness Scale (BIS), but not with the Wender Utah Rating Scale (WURS). Criterion validity was established with ADHD subtype and diagnosis based on DSM-IV criteria. Sensitivity and specificity were high for all four subscales. The reported results confirm our previous study and show that the German CAARS-S/O do indeed represent a reliable and cross-culturally valid measure of current ADHD symptoms in adults. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
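
    The sensitivity and specificity figures above come from ROC analysis. The sketch below shows a generic ROC computation with scikit-learn on simulated subscale scores; the group sizes, score distributions and resulting cutoff are invented, not the CAARS data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
# Toy data: rating-scale scores for simulated ADHD patients (1) and controls (0).
y = np.concatenate([np.ones(200), np.zeros(300)])
scores = np.concatenate([rng.normal(65, 10, 200), rng.normal(50, 10, 300)])

fpr, tpr, thresholds = roc_curve(y, scores)
youden = tpr - fpr                      # Youden's J is one way to pick a cutoff
best = np.argmax(youden)
print("AUC:", round(roc_auc_score(y, scores), 3))
print("cutoff:", round(thresholds[best], 1),
      "sensitivity:", round(tpr[best], 2),
      "specificity:", round(1 - fpr[best], 2))
```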

  1. Validation of soil hydraulic pedotransfer functions at the local and catchment scale for an Indonesian basin

    NASA Astrophysics Data System (ADS)

    Booij, Martijn J.; Oldhoff, Ruben J. J.; Rustanto, Andry

    2016-04-01

    In order to accurately model the hydrological processes in a catchment, information on the soil hydraulic properties is of great importance. These data can be obtained by conducting field work, which is costly and time consuming, or by using pedotransfer functions (PTFs). A PTF is an empirical relationship between easily obtainable soil characteristics and a soil hydraulic parameter. In this study, PTFs for the saturated hydraulic conductivity (Ks) and the available water content (AWC) are investigated. PTFs are area-specific, since, for instance, tropical soils often have a different composition and hydraulic behaviour compared to temperate soils. Application of temperate-soil PTFs to tropical soils might result in poor performance, which is a problem as few tropical-soil PTFs are available. The objective of this study is to determine whether Ks and AWC can be accurately approximated using PTFs, by analysing their performance at both the local scale and the catchment scale. Four published PTFs for Ks and AWC are validated on a data set of 91 soil samples collected in the Upper Bengawan Solo catchment on Java, Indonesia. The AWC is predicted very poorly, with Nash-Sutcliffe Efficiency (NSE) values below zero for all selected PTFs. Better results were found for the Ks PTFs: the Wösten and Rosetta-3 PTFs predict Ks moderately accurately, with NSE values of 0.28 and 0.39, respectively. New PTFs for both AWC and Ks were developed using multiple linear regression, and NSE values of 0.37 (AWC) and 0.55 (Ks) were obtained. Although these values are not very high, they are significantly higher than those for the published PTFs. The hydrological SWAT model was set up for the Keduang, a sub-catchment of the Upper Bengawan Solo River, to simulate monthly catchment streamflow. Eleven cases were defined to validate the PTFs at the catchment scale. For the Ks-PTF cases, NSE values of around 0.84 were obtained for the validation period. The use of AWC PTFs resulted in slightly lower NSE
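
    The Nash-Sutcliffe Efficiency used throughout this record has a simple closed form, NSE = 1 - Σ(O - S)² / Σ(O - Ō)². A minimal sketch with invented Ks values, not the Indonesian soil data, is given below.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means no better than
    predicting the mean of the observations, negative values are worse."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Toy example: observed Ks values vs. two hypothetical PTF predictions.
ks_obs = np.array([12.0, 30.0, 55.0, 8.0, 22.0])
print(nse(ks_obs, np.array([15.0, 26.0, 48.0, 11.0, 25.0])))   # a reasonable PTF
print(nse(ks_obs, np.full(5, ks_obs.mean())))                  # NSE = 0 by definition
```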

  2. Alphabus Mechanical Validation Plan and Test Campaign

    NASA Astrophysics Data System (ADS)

    Calvisi, G.; Bonnet, D.; Belliol, P.; Lodereau, P.; Redoundo, R.

    2012-07-01

    A joint team of the two leading European satellite companies (Astrium and Thales Alenia Space) worked with the support of ESA and CNES to define a product line able to efficiently address the upper segment of communications satellites: Alphabus. Starting in 2009 and continuing up to 2011, the mechanical validation of the Alphabus platform was obtained through static tests performed on a dedicated static model and environmental tests performed on the first satellite based on Alphabus, Alphasat I-XL. The mechanical validation of the Alphabus platform presented an excellent opportunity to improve the validation and qualification process with respect to the static, sine vibration, acoustic and L/V shock environments, minimizing the recurrent cost of manufacturing, integration and testing. A main driver for mechanical testing is that mechanical acceptance testing at satellite level will be performed with empty tanks, due to technical constraints (limitations of existing vibration devices) and programmatic advantages (test risk reduction, test schedule minimization). In this paper, the impacts that such a testing logic has on the validation plan are briefly recalled, and its actual application to the Alphasat PFM mechanical test campaign is detailed.

  3. Verification and Validation of Adaptive and Intelligent Systems with Flight Test Results

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Larson, Richard R.

    2009-01-01

    The F-15 IFCS project goals are to: (a) demonstrate control approaches that can efficiently optimize aircraft performance in both normal and failure conditions ([A] and [B] failures); and (b) advance neural-network-based flight control technology for new aerospace system designs with a pilot in the loop. Gen II objectives include: (a) implement and fly a direct adaptive neural-network-based flight controller; (b) demonstrate the ability of the system to adapt to simulated system failures, by (1) suppressing transients associated with the failure and (2) re-establishing sufficient control and handling of the vehicle for safe recovery; and (c) provide flight experience for the development of verification and validation processes for flight-critical neural network software.

  4. The development and validation of a test of science critical thinking for fifth graders.

    PubMed

    Mapeala, Ruslan; Siew, Nyet Moi

    2015-01-01

    The paper describes the development and validation of the Test of Science Critical Thinking (TSCT) to measure three critical thinking skill constructs: comparing and contrasting, sequencing, and identifying cause and effect. The initial TSCT consisted of 55 multiple-choice test items, each of which required participants to select a correct response and a correct choice of the critical thinking used for their response. Data were obtained from a purposive sample of 30 fifth graders in a pilot study carried out in a primary school in Sabah, Malaysia. Students underwent teaching and learning activities for 9 weeks using the Thinking Maps-aided Problem-Based Learning Module before they answered the TSCT. Analyses were conducted to check the difficulty index (p), discrimination index (d), internal consistency reliability, content validity, and face validity. Analysis of the test-retest reliability data was conducted separately for a group of fifth graders with similar ability. Findings of the pilot study showed that, of the 55 items initially administered, only 30 items were selected, with relatively good difficulty indices (p) ranging from 0.40 to 0.60 and good discrimination indices (d) ranging from 0.20 to 1.00. The Kuder-Richardson reliability values were found to be appropriate and relatively high, at 0.70, 0.73 and 0.92 for identifying cause and effect, sequencing, and comparing and contrasting, respectively. The content validity index obtained from three expert judgments equalled or exceeded 0.95. In addition, test-retest reliability showed good, statistically significant correlations ([Formula: see text]). From the above results, the selected 30-item TSCT was found to have sufficient reliability and validity and would therefore represent a useful tool for measuring critical thinking ability among fifth graders in primary science.
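
    The Kuder-Richardson reliability reported above (KR-20 for dichotomous items) can be computed as in the sketch below; the data are simulated and the item count is arbitrary, so this is a generic illustration, not the TSCT analysis.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 for dichotomous (0/1) item responses,
    shape = (examinees, items)."""
    k = responses.shape[1]
    p = responses.mean(axis=0)           # proportion correct per item
    q = 1 - p
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - (p * q).sum() / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(size=(200, 1))
# Simulated 30-item test: the probability of a correct answer rises with ability.
items = (rng.random((200, 30))
         < 1 / (1 + np.exp(-(ability - rng.normal(size=30))))).astype(int)
print(round(kr20(items), 2))
```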

  5. Validation of Aura Data: Needs and Implementation

    NASA Astrophysics Data System (ADS)

    Froidevaux, L.; Douglass, A. R.; Schoeberl, M. R.; Hilsenrath, E.; Kinnison, D. E.; Kroon, M.; Sander, S. P.

    2003-12-01

    We describe the needs for validation of the Aura scientific data products expected in 2004 and for several years thereafter, as well as the implementation plan to fulfill these needs. Many profiles of stratospheric and tropospheric composition are expected from the combination of four instruments aboard Aura, along with column abundances and aerosol and cloud information. The Aura validation working group and the Aura Project have been developing programs and collaborations that are expected to lead to a significant number of validation activities after the Aura launch (in early 2004). Spatial and temporal variability in the lower stratosphere and troposphere present challenges to the validation of Aura measurements, even where cloud contamination effects can be minimized. Data from ground-based networks, balloons, and other satellites will contribute in a major way to Aura data validation. In addition, plans are in place to obtain correlative data for special conditions, such as profiles of O3 and NO2 in polluted areas. Several aircraft campaigns planned for the 2004-2007 time period will provide additional tropospheric and lower stratospheric validation opportunities for Aura; some atmospheric science goals will be addressed by the eventual combination of these data sets. A team of "Aura liaisons" will assist in the dissemination of information about the various correlative measurements to be expected in the above timeframe, along with any needed protocols and agreements on data exchange and file formats. A data center is being established at the Goddard Space Flight Center to collect and distribute the various data files to be used in the validation of the Aura data.

  6. Stochastic Hourly Weather Generator HOWGH: Validation and its Use in Pest Modelling under Present and Future Climates

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Hirschi, M.; Spirig, C.

    2014-12-01

    To quantify the impact of climate change on a specific pest (or any weather-dependent process) at a specific site, we may use a site-calibrated pest (or other) model and compare its outputs obtained with site-specific weather data representing present vs. perturbed climates. The input weather data may be produced by a stochastic weather generator. Apart from the quality of the pest model, the reliability of the results obtained in such an experiment depends on the ability of the generator to represent the statistical structure of real-world weather series, and on the sensitivity of the pest model to possible imperfections of the generator. This contribution deals with the multivariate HOWGH weather generator, which is based on a combination of parametric and non-parametric statistical methods. Here, HOWGH is used to generate synthetic hourly series of three weather variables (solar radiation, temperature and precipitation) required by the dynamic pest model SOPRA to simulate the development of codling moth. The contribution presents results of the direct and indirect validation of HOWGH. In the direct validation, the synthetic series generated by HOWGH (various settings of its underlying model are assumed) are validated in terms of multiple climatic characteristics, focusing on subdaily wet/dry and hot/cold spells. In the indirect validation, we assess the generator in terms of characteristics derived from the outputs of the SOPRA model fed by the observed vs. synthetic series. The weather generator may be used to produce weather series representing present and future climates. In the latter case, the parameters of the generator may be modified by climate change scenarios based on Global or Regional Climate Models. To demonstrate this feature, results of codling moth simulations for a future climate will be shown. Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry
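
    HOWGH itself combines parametric and non-parametric components; purely as a generic illustration of the simplest building block of many stochastic weather generators, the sketch below simulates hourly wet/dry occurrence with a first-order Markov chain and reports two statistics one might compare against an observed series. The transition probabilities are invented, not HOWGH parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
# Illustrative hourly wet/dry transition probabilities (not HOWGH parameters).
p_wet_given_dry, p_wet_given_wet = 0.05, 0.6

hours = 24 * 365
wet = np.zeros(hours, dtype=bool)
for t in range(1, hours):
    p = p_wet_given_wet if wet[t - 1] else p_wet_given_dry
    wet[t] = rng.random() < p

# Two of the characteristics one would validate against the observed series.
print("wet-hour fraction:", round(wet.mean(), 3))
print("P(wet | previous hour wet):", round(wet[1:][wet[:-1]].mean(), 3))
```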

  7. The Montreal Cognitive Assessment as a preliminary assessment tool in general psychiatry: Validity of MoCA in psychiatric patients.

    PubMed

    Gierus, J; Mosiołek, A; Koweszko, T; Wnukiewicz, P; Kozyra, O; Szulc, A

    2015-01-01

    The aim of the presented research was to obtain initial data regarding the validity of the Montreal Cognitive Assessment (MoCA) in diagnosing cognitive impairment in psychiatrically hospitalized patients. The MoCA results obtained from 221 patients were analyzed in terms of the proportional participation of patients with particular diagnoses in three result ranges. In 67 patients, an additional version of the scale was also used. A comparative analysis of average results in particular diagnostic groups (organically based disorders, disorders due to psychoactive substance use, psychotic disorders, neurotic disorders and personality disorders) was also carried out, as well as an analysis of the scale's accuracy as a diagnostic test in detecting organic disorders. The reliability of the test, measured with a between-tests correlation coefficient, was rho=0.92 (P=.000). Significant differences between particular diagnostic groups were detected (J-T=13736; P=.000). The cutoff point of 23 turned out to have satisfactory sensitivity and specificity (0.82 and 0.70, respectively) in diagnosing organically based disorders. The area below the receiver operating characteristic curve (AUC=0.854; P=.000) suggests that the MoCA has satisfactory value as a classifier. The initial data suggest the MoCA's high value in predicting a future diagnosis of organically based disorders, and the initial results obtained in particular diagnostic groups support the construct validity of the method. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Measurement and computer simulation of antennas on ships and aircraft for results of operational reliability

    NASA Astrophysics Data System (ADS)

    Kubina, Stanley J.

    1989-09-01

    The review of the status of computational electromagnetics by Miller and the exposition by Burke of developments in one of the more important computer codes applying the electric field integral equation method, the Numerical Electromagnetics Code (NEC), coupled with Molinet's summary of progress in techniques based on the Geometrical Theory of Diffraction (GTD), provide a clear perspective on the maturity of the modern discipline of computational electromagnetics and its potential. Audone's exposition of its application to the computation of radar scattering cross-section (RCS) indicates the breadth of practical applications, and his exploitation of modern near-field measurement techniques reminds one of progress in the measurement discipline, which is essential to the validation or calibration of computational modeling methodology when applied to complex structures such as aircraft and ships. The latter monograph also presents some comparisons with computational models. Some of the results presented for scale-model and flight measurements show serious disagreements in the lobe structure that would require detailed examination. This also applies to the radiation patterns obtained by flight measurement compared with those obtained using wire-grid models and integral equation modeling methods. In the examples which follow, an attempt is made to match measurement results completely over the entire 2 to 30 MHz HF range for antennas on a large patrol aircraft. The problem of validating computer models of HF antennas on a helicopter, and the use of computer models to generate radiation-pattern information that cannot be obtained by measurement, are discussed. The use of NEC computer models to analyze top-side ship configurations, where measurement results are not available and only self-validation measures or, at best, comparisons with an alternative GTD computer modeling technique are possible, is also discussed.

  9. Fly's Eye GLM Simulator Preliminary Validation Analysis

    NASA Astrophysics Data System (ADS)

    Quick, M. G.; Christian, H. J., Jr.; Blakeslee, R. J.; Stewart, M. F.; Corredor, D.; Podgorny, S.

    2017-12-01

    As part of the validation effort for the Geostationary Lightning Mapper (GLM), an airborne radiometer array has been fabricated to observe lightning optical emission through the cloud top. The Fly's Eye GLM Simulator (FEGS) is a multi-spectral, photo-electric radiometer array with a nominal spatial resolution of 2 x 2 km and a spatial footprint of 10 x 10 km at cloud top. A main 25-pixel array observes the 777.4 nm oxygen emission triplet using an optical passband filter with a 10 nm FWHM, a sampling rate of 100 kHz, and 16-bit resolution. From March to May of 2017, FEGS was flown on the NASA ER-2 high-altitude aircraft during the GOES-R Validation Flight Campaign. Optical signatures of lightning were observed during a variety of thunderstorm scenarios while coincident measurements were obtained by GLM and ground-based antenna networks. This presentation will describe the preliminary analysis of the FEGS dataset in the context of GLM validation.

  10. Comparison of sigma(o) obtained from the conventional definition with sigma(o) appearing in the radar equation for randomly rough surfaces

    NASA Technical Reports Server (NTRS)

    Levine, D. M.

    1981-01-01

    A comparison is made of the radar cross section of a rough surface calculated in one case from the conventional definition and obtained in the other case directly from the radar equation. The validity of using the conventional definition to represent the cross section appearing in the radar equation is determined. The analysis is carried out for the special case of perfectly conducting, randomly corrugated surfaces in the physical optics limit. The radar equation is obtained by solving for the radiation scattered from an arbitrary source back to a colocated antenna. The signal out of the receiving antenna is computed from this solution and the result put into a form recognizable as the radar equation. The conventional definition is obtained by solving a similar problem, but for backscatter from an incident plane wave. It is shown that these two forms for sigma are the same if the observer is far enough from the surface.
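
    To make the comparison concrete, the standard textbook forms of the two quantities being contrasted are reproduced below in generic notation; these are not the paper's exact expressions, only the usual definitions of the scattering cross section, its normalized per-unit-area version sigma(o), and the radar equation for a distributed target in which sigma(o) appears.

```latex
% Conventional definition of the scattering cross section and sigma^0:
\sigma = \lim_{R \to \infty} 4\pi R^{2}\,
         \frac{|\mathbf{E}_{s}|^{2}}{|\mathbf{E}_{i}|^{2}},
\qquad
\sigma^{0} = \frac{\langle \sigma \rangle}{A} .

% Monostatic radar equation for an area-extensive (rough-surface) target:
P_{r} = \int_{A} \frac{P_{t}\, G^{2}(\theta,\phi)\, \lambda^{2}\, \sigma^{0}}
                      {(4\pi)^{3} R^{4}} \, \mathrm{d}A .
```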

  11. Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.

    PubMed

    Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F

    2017-05-01

    To lay the basis for studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With an increasing level of automation, driving comfort becomes increasingly important. Simulators provide a safe environment in which to study perceived comfort in autonomous driving. To date, however, no studies have been conducted on comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants (N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers in a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator depends highly on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point to a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.

  12. [Validation of a dietary habits questionnaire related to fats and sugars intake].

    PubMed

    Aráuz Hernández, Ana Gladys; Roselló Araya, Marlene; Guzmán Padilla, Sonia; Padilla Vargas, Gioconda

    2008-12-01

    The objective of this study was to design and validate a psychometric tool to measure dietary practices related to the intake of fats and sugars in a sample of overweight and obese adults. Classical test theory was applied. The validated construct was dietary habits, and the following theoretical dimensions were used: exclusion, modification, substitution and replacement. These had been previously defined in similar studies conducted in other countries. The tool was validated with 139 adults, male and female, with body mass indexes equal to or higher than 25. Construct validity for each section of the tool was obtained through factor analysis. The final tool was made up of 47 items. Cronbach's alpha reliability coefficient was 0.948, which indicates highly satisfactory internal consistency. Based on the scree plot and factor analysis of the four proposed theoretical dimensions of behavior, items were fused into two dimensions with a cumulative variance of 58%. These were renamed "elimination" and "modification". Cronbach's alphas were 0.906 and 0.873, respectively, indicating a high level of reliability for construct measurement. The results show the need to adapt foreign tools to our socio-cultural context before using them in interventions intended to modify dietary patterns, since these are interrelated with other aspects of the culture itself.

  13. Using Internet search engines to obtain medical information: a comparative study.

    PubMed

    Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun; Xu, Dong

    2012-05-16

    The Internet has become one of the most important means of obtaining health and medical information. It is often the first step in checking for basic information about a disease and its treatment, and the search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. To compare major Internet search engines in their usability for obtaining medical and health information. We applied usability testing, a software engineering technique and standard industry practice, to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword "breast cancer" in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined the nonredundant links from the four search engines and gave them to volunteer users in alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified in advance six well-known websites related to breast cancer as standards. We also used five keywords associated with breast cancer defined in the latest release of the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in the search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. The search results highly overlapped between the

  14. SATS HVO Concept Validation Experiment

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria; Williams, Daniel; Murdoch, Jennifer; Adams, Catherine

    2005-01-01

    A human-in-the-loop simulation experiment was conducted at the NASA Langley Research Center's (LaRC) Air Traffic Operations Lab (ATOL) in an effort to comprehensively validate tools and procedures intended to enable the Small Aircraft Transportation System, Higher Volume Operations (SATS HVO) concept of operations. The SATS HVO procedures were developed to increase the rate of operations at non-towered, non-radar airports in near all-weather conditions. A key element of the design is the establishment of a volume of airspace around designated airports where pilots accept responsibility for self-separation. Flights operating at these airports are given approach sequencing information computed by a ground-based automated system. The SATS HVO validation experiment was conducted in the ATOL during the spring of 2004 in order to determine whether a pilot can safely and proficiently fly an airplane while performing SATS HVO procedures. Comparative measures of flight path error, perceived workload and situation awareness were obtained for two types of scenarios. Baseline scenarios were representative of today's system utilizing procedural separation, where air traffic control grants one approach or departure clearance at a time. SATS HVO scenarios represented approach and departure procedures as described in the SATS HVO concept of operations. Results from the experiment indicate that low-time pilots were able to fly SATS HVO procedures and maintain self-separation as safely and proficiently as when flying today's procedures.

  15. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of cerebral arteriovenous malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models of the complex vessel structure. The validation of the hemodynamic assessment is based on invasive clinical measurements and cross-validation against the validated Philips proprietary software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between the modeling results and the clinical/cross-validation measurements, using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions, and the model was also cross-validated against Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical results, with only small deviations. In this article, we have validated our modeling results against clinical measurements, and a new approach for cross-validation is proposed by demonstrating the accuracy of our results against a validated product in a clinical environment.
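
    A hedged sketch of the statistical comparison described above, an independent two-tailed t test plus a regression between modelled and measured values, is given below with simulated numbers; the variable meanings and magnitudes are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Toy example: modelled vs. clinically measured flow (arbitrary units) at
# matched vessel locations; values are simulated, not patient data.
modelled = rng.normal(120, 15, 150)
measured = modelled + rng.normal(0, 5, 150)

t, p = stats.ttest_ind(modelled, measured)            # independent two-tailed t test
slope, intercept, r, p_reg, se = stats.linregress(measured, modelled)
print(f"t = {t:.2f}, p = {p:.3f}; regression R^2 = {r**2:.3f}")
```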

  16. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficiently to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  17. Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim

    2017-09-01

    For many years now, IRSN has developed its own continuous-energy Monte Carlo capability, which allows various nuclear data libraries to be tested. To that end, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to the benchmark keff values, and the main discrepancies are analyzed with respect to the neutron spectrum. Special attention is paid to benchmarks for which the results changed markedly between the two JEFF-3 versions.
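
    Comparisons of this kind are usually summarized as calculated-minus-benchmark keff discrepancies, often quoted in pcm. The snippet below illustrates only that bookkeeping, on made-up values, not the actual IRSN benchmark results.

```python
import pandas as pd

# Toy comparison of calculated vs. benchmark keff values (invented numbers).
df = pd.DataFrame({
    "benchmark": ["case-1", "case-2", "case-3"],
    "keff_benchmark": [1.0000, 0.9985, 1.0012],
    "keff_jeff311":   [1.0008, 0.9978, 1.0030],
    "keff_jeff32":    [1.0003, 0.9990, 1.0021],
})
for lib in ("keff_jeff311", "keff_jeff32"):
    df[lib + "_pcm"] = (df[lib] - df["keff_benchmark"]) * 1e5   # discrepancy in pcm
print(df)
```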

  18. Spanish translation, cross-cultural adaptation, and validation of the Questionnaire for Diabetes-Related Foot Disease (Q-DFD)

    PubMed Central

    Castillo-Tandazo, Wilson; Flores-Fortty, Adolfo; Feraud, Lourdes; Tettamanti, Daniel

    2013-01-01

    Purpose To translate, cross-culturally adapt, and validate the Questionnaire for Diabetes-Related Foot Disease (Q-DFD), originally created and validated in Australia, for its use in Spanish-speaking patients with diabetes mellitus. Patients and methods The translation and cross-cultural adaptation were based on international guidelines. The Spanish version of the survey was applied to a community-based (sample A) and a hospital clinic-based sample (samples B and C). Samples A and B were used to determine criterion and construct validity comparing the survey findings with clinical evaluation and medical records, respectively; while sample C was used to determine intra- and inter-rater reliability. Results After completing the rigorous translation process, only four items were considered problematic and required a new translation. In total, 127 patients were included in the validation study: 76 to determine criterion and construct validity and 41 to establish intra- and inter-rater reliability. For an overall diagnosis of diabetes-related foot disease, a substantial level of agreement was obtained when we compared the Q-DFD with the clinical assessment (kappa 0.77, sensitivity 80.4%, specificity 91.5%, positive likelihood ratio [LR+] 9.46, negative likelihood ratio [LR−] 0.21); while an almost perfect level of agreement was obtained when it was compared with medical records (kappa 0.88, sensitivity 87%, specificity 97%, LR+ 29.0, LR− 0.13). Survey reliability showed substantial levels of agreement, with kappa scores of 0.63 and 0.73 for intra- and inter-rater reliability, respectively. Conclusion The translated and cross-culturally adapted Q-DFD showed good psychometric properties (validity, reproducibility, and reliability) that allow its use in Spanish-speaking diabetic populations. PMID:24039434
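
    The agreement statistics quoted above (kappa, sensitivity, specificity and likelihood ratios) all derive from a 2x2 table of questionnaire result versus reference standard. A minimal sketch with a hypothetical table, not the study counts, is shown below.

```python
def screening_stats(tp, fp, fn, tn):
    """Agreement and diagnostic indices from a 2x2 table (questionnaire vs.
    reference standard); returns kappa, sensitivity, specificity, LR+, LR-."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return kappa, sens, spec, sens / (1 - spec), (1 - sens) / spec

# Hypothetical 2x2 table (true/false positives and negatives), not the Q-DFD data.
print([round(v, 2) for v in screening_stats(tp=40, fp=5, fn=10, tn=45)])
```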

  19. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    Validation of the electromagnetic code FACETS for numerical simulation of radar target images (S. Wong, DRDC Ottawa). Validation of FACETS for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design...

  20. [Design and validation of a questionnaire on attitudes to prevention and health promotion in primary care (CAPPAP)].

    PubMed

    Ramos-Morcillo, Antonio Jesús; Martínez-López, Emilio J; Fernández-Salazar, Serafín; del-Pino-Casado, Rafael

    2013-12-01

    To develop and validate a questionnaire to measure attitudes towards prevention and health promotion. Cross-sectional study for the validation of a questionnaire. Primary Health Care (autonomous community of Andalusia, Spain). 282 professionals (nurses and doctors) belonging to the Public Health System. Content validation by experts, ceiling and floor effects, correlation between items, internal consistency, stability, and exploratory factor analysis. The 56 items obtained for the tool (CAPPAP), including those drawn from a review of other tools and the contributions of the experts, were grouped into 5 dimensions. The percentage of expert agreement was over 70% on all items, and a high concordance between the prevention and promotion items was obtained; duplicates were therefore removed, leaving a final tool with 44 items. The internal consistency, measured by Cronbach's alpha, was 0.888. The test-retest analysis indicated concordance ranging from substantial to almost perfect. Exploratory factor analysis identified five factors that accounted for 48.92% of the variance. CAPPAP is a tool that is quick and easy to administer, that is well accepted by professionals, and that has acceptable psychometric results, both globally and at the level of each dimension. Copyright © 2012 Elsevier España, S.L. All rights reserved.
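
    As a minimal sketch of the internal-consistency statistic used above, the function below computes Cronbach's alpha from an item-response matrix; the Likert responses are hypothetical, not CAPPAP data.

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)          # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical Likert responses (5 respondents x 4 items), for illustration only.
    scores = np.array([
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 3, 4],
        [1, 2, 1, 2],
    ], dtype=float)
    print(round(cronbach_alpha(scores), 3))
    ```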

  1. Development and Construct Validation of the Interprofessional Attitudes Scale

    PubMed Central

    Norris, Jeffrey; Carpenter, Joan G.; Eaton, Jacqueline; Guo, Jia-Wen; Lassche, Madeline; Pett, Marjorie A.; Blumenthal, Donald K.

    2015-01-01

    Purpose: Training of health professionals requires the development of interprofessional competencies and the assessment of these competencies. No validated tools exist to assess all four competency domains described in the 2011 Core Competencies for Interprofessional Collaborative Practice (the IPEC Report). The purpose of this study was to develop and validate a scale based on the IPEC competency domains that assesses the interprofessional attitudes of students in the health professions. Method: In 2012, a survey tool was developed and administered to 1,549 students from the University of Utah Health Science Center, an academic health center composed of four schools and colleges (Health, Medicine, Nursing, and Pharmacy). Exploratory and confirmatory factor analyses (EFA and CFA) were performed to validate the assessment tool, eliminate redundant questions, and identify subscales. Results: The EFA and CFA focused on aligning subscales with the IPEC core competencies and on demonstrating good construct validity and internal consistency reliability. A response rate of 45% (n = 701) was obtained. Responses with complete data (n = 678) were randomly split into two datasets, which were independently analyzed using EFA and CFA. The EFA produced a 27-item scale with five subscales (Cronbach's alpha coefficients: 0.62 to 0.92). The CFA indicated that the content of the five subscales was consistent with the EFA model. Conclusions: The Interprofessional Attitudes Scale (IPAS) is a novel tool that, compared with previous tools, better reflects current trends in interprofessional competencies. The IPAS should be useful to health sciences educational institutions and others training people to work collaboratively in interprofessional teams. PMID:25993280
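
    A hedged sketch of the split-sample exploratory step described above is given below, assuming a hypothetical response matrix; it uses scikit-learn's FactorAnalysis as a stand-in for the authors' EFA software and does not reproduce the CFA, which would normally require a dedicated structural equation modeling package.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical response matrix: 678 complete responses to 27 Likert items.
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(678, 27)).astype(float)

    # Random split into two halves: one for the exploratory analysis,
    # the other reserved for confirming the structure (CFA not shown here).
    idx = rng.permutation(len(responses))
    efa_half, cfa_half = responses[idx[:339]], responses[idx[339:]]

    # Exploratory factor analysis with five factors (the number retained above).
    efa = FactorAnalysis(n_components=5, random_state=0)
    efa.fit(efa_half)
    loadings = efa.components_.T        # (items x factors) loading matrix
    print(loadings.shape, cfa_half.shape)
    ```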

  2. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
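
    A minimal sketch of one way to flag slow-to-fast solar wind transitions (boxcar smoothing followed by upward threshold crossings) is shown below; the thresholds, window length, and speed series are hypothetical, and this is not necessarily the algorithm used in the study.

    ```python
    import numpy as np

    def detect_fast_wind_onsets(speed_km_s, slow=400.0, fast=500.0, window=12):
        """Flag indices where the smoothed solar wind speed rises from below
        `slow` to above `fast` (a simple slow-to-fast transition detector)."""
        kernel = np.ones(window) / window
        smoothed = np.convolve(speed_km_s, kernel, mode="same")   # boxcar smoothing
        onsets, below = [], True
        for i, v in enumerate(smoothed):
            if below and v >= fast:
                onsets.append(i)      # upward crossing of the fast threshold
                below = False
            elif v <= slow:
                below = True          # re-arm once the wind is slow again
        return onsets

    # Hypothetical hourly speed series containing one high-speed stream.
    t = np.arange(240)
    speed = 350 + 250 * np.exp(-((t - 120) / 20.0) ** 2)
    print(detect_fast_wind_onsets(speed))
    ```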

  3. Multiattribute health utility scoring for the computerized adaptive measure CAT-5D-QOL was developed and validated.

    PubMed

    Kopec, Jacek A; Sayre, Eric C; Rogers, Pamela; Davis, Aileen M; Badley, Elizabeth M; Anis, Aslam H; Abrahamowicz, Michal; Russell, Lara; Rahman, Md Mushfiqur; Esdaile, John M

    2015-10-01

    The CAT-5D-QOL is a previously reported item response theory (IRT)-based computerized adaptive tool to measure five domains (attributes) of health-related quality of life. The objective of this study was to develop and validate a multiattribute health utility (MAHU) scoring method for this instrument. The MAHU scoring system was developed in two stages. In phase I, we obtained standard gamble (SG) utilities for 75 hypothetical health states in which only one domain varied (15 states per domain). In phase II, we obtained SG utilities for 256 multiattribute states. We fit a multiplicative regression model to predict SG utilities from the five IRT domain scores. The prediction model was constrained using data from phase I. We validated MAHU scores by comparing them with the Health Utilities Index Mark 3 (HUI3) and directly measured utilities and by assessing between-group discrimination. MAHU scores have a theoretical range from -0.842 to 1. In the validation study, the scores were, on average, higher than HUI3 utilities and lower than directly measured SG utilities. MAHU scores correlated strongly with the HUI3 (Spearman ρ = 0.78) and discriminated well between groups expected to differ in health status. Results reported here provide initial evidence supporting the validity of the MAHU scoring system for the CAT-5D-QOL. Copyright © 2015 Elsevier Inc. All rights reserved.
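
    As a hedged illustration of a multiplicative multiattribute utility function, the sketch below implements the generic Keeney-Raiffa multiplicative form; it is not necessarily the exact CAT-5D-QOL scoring model, and the single-attribute utilities and weights are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def multiplicative_utility(u, k):
        """Generic Keeney-Raiffa multiplicative multi-attribute utility:
        U = (prod_j(1 + c*k_j*u_j) - 1) / c, where the scaling constant c
        solves 1 + c = prod_j(1 + c*k_j). Reduces to the additive model
        when the weights k_j sum to 1."""
        u, k = np.asarray(u, float), np.asarray(k, float)
        s = k.sum()

        def f(c):
            return np.prod(1.0 + c * k) - (1.0 + c)

        if abs(s - 1.0) < 1e-9:            # additive special case (c -> 0)
            return float(np.dot(k, u))
        if s > 1.0:                        # non-trivial root lies in (-1, 0)
            c = brentq(f, -1.0 + 1e-9, -1e-9)
        else:                              # non-trivial root lies in (0, inf)
            c = brentq(f, 1e-9, 1e6)
        return float((np.prod(1.0 + c * k * u) - 1.0) / c)

    # Hypothetical single-attribute utilities and weights for five domains.
    print(round(multiplicative_utility([0.9, 0.8, 0.7, 0.95, 0.85],
                                       [0.25, 0.20, 0.15, 0.30, 0.25]), 3))
    ```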

  4. Validity and reliability of the Malay version of sleep apnea quality of life index – preliminary results

    PubMed Central

    2013-01-01

    Background: The objective of this study was to determine the validity and reliability of the Malay-translated Sleep Apnea Quality of Life Index (SAQLI) in patients with obstructive sleep apnea (OSA). Methods: In this cross-sectional study, the Malay version of the SAQLI was administered to 82 OSA patients seen at the OSA Clinic, Hospital Universiti Sains Malaysia, prior to their treatment. Additionally, the patients were asked to complete the Malay version of the Medical Outcomes Study Short Form (SF-36). Twenty-three patients completed the Malay version of the SAQLI again after 1–2 weeks to assess its reliability. Results: Initial factor analysis of the 40-item Malay version of the SAQLI resulted in four factors with eigenvalues >1. All items had factor loadings >0.5, but one of the factors was unstable, with only two items. However, both items were retained due to their high communalities, and the analysis was repeated with a forced three-factor solution. The variance accounted for by the three factors was 78.17%, with 9–18 items per factor. All items had primary loadings over 0.5, although the loadings were inconsistent with the proposed construct. The Cronbach's alpha values were very high for all domains, >0.90. The instrument was able to discriminate between patients with mild or moderate OSA and those with severe OSA. The Malay version of the SAQLI correlated positively with the SF-36. The intraclass correlation coefficients for all domains were >0.90. Conclusions: In light of these preliminary observations, we conclude that the Malay version of the SAQLI has a high degree of internal consistency and concurrent validity, albeit demonstrating a slightly different construct than the original version. The responsiveness of the questionnaire to changes in health-related quality of life following OSA treatment is yet to be determined. PMID:23786866

  5. Validation of Medicaid claims-based diagnosis of myocardial infarction using an HIV clinical cohort

    PubMed Central

    Brouwer, Emily S.; Napravnik, Sonia; Eron, Joseph J; Simpson, Ross J; Brookhart, M. Alan; Stalzer, Brant; Vinikoor, Michael; Floris-Moore, Michelle; Stürmer, Til

    2014-01-01

    Background: In non-experimental comparative effectiveness research using healthcare databases, outcome measurements must be validated to evaluate and potentially adjust for misclassification bias. We aimed to validate claims-based myocardial infarction algorithms in a Medicaid population using an HIV clinical cohort as the gold standard. Methods: Medicaid administrative data were obtained for the years 2002–2008 and linked to the UNC CFAR HIV Clinical Cohort based on social security number, first name, and last name, and myocardial infarctions were adjudicated. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated. Results: There were 1,063 individuals included. Over a median observed time of 2.5 years, 17 had a myocardial infarction. Specificity ranged from 0.979 to 0.993, with the highest specificity obtained using criteria with the ICD-9 code in the primary and secondary position and a length of stay ≥ 3 days. Sensitivity of myocardial infarction ascertainment varied from 0.588 to 0.824 depending on the algorithm. Conclusion: Specificities of the various claims-based myocardial infarction ascertainment criteria are high, but small changes impact positive predictive value in a cohort with low incidence. Sensitivities vary based on ascertainment criteria. The type of algorithm used should be chosen based on the study question and on maximizing the validation parameters that will minimize bias, while also considering precision. PMID:23604043
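
    A minimal sketch of the validation metrics named above, computed from a 2x2 table of algorithm result versus adjudicated outcome, is given below; the counts are hypothetical, not the study data.

    ```python
    # Validation metrics for a claims-based algorithm against a gold standard,
    # from a 2x2 table of (algorithm result x adjudicated outcome).
    # The counts below are hypothetical, for illustration only.
    tp, fp, fn, tn = 14, 8, 3, 1038

    sensitivity = tp / (tp + fn)   # P(algorithm positive | true MI)
    specificity = tn / (tn + fp)   # P(algorithm negative | no MI)
    ppv = tp / (tp + fp)           # P(true MI | algorithm positive)
    npv = tn / (tn + fn)           # P(no MI | algorithm negative)

    print(f"sens={sensitivity:.3f} spec={specificity:.3f} "
          f"PPV={ppv:.3f} NPV={npv:.3f}")
    ```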

  6. A Compact Forearm Crutch Based on Force Sensors for Aided Gait: Reliability and Validity.

    PubMed

    Chamorro-Moriana, Gema; Sevillano, José Luis; Ridao-Fernández, Carmen

    2016-06-21

    Frequently, patients who suffer injuries to a lower limb require forearm crutches in order to partially unload weight-bearing. These injuries cause pain, and lower-limb unloading should be monitored objectively to avoid significant errors in accuracy and, consequently, complications and after-effects. A new, feasible tool is therefore needed to monitor and improve the accuracy of the loads exerted on crutches during aided gait, so as to unburden the lower limbs. In this paper, we describe such a system based on a force sensor, which we have named the GCH System 2.0. Furthermore, we determine the validity and reliability of measurements obtained using this tool via a comparison with the validated AMTI (Advanced Mechanical Technology, Inc., Watertown, MA, USA) OR6-7-2000 platform. An intra-class correlation coefficient demonstrated excellent agreement between the AMTI platform and the GCH System. A regression line determining the predictive ability of the GCH System with respect to the AMTI platform was obtained, with a precision of 99.3%. A detailed statistical analysis is presented for all the measurements, and also segregated by the requested loads on the crutches (10%, 25%, and 50% of body weight). Our results show that our system, designed for assessing the loads exerted by patients on forearm crutches during assisted gait, provides valid and reliable measurements of loads.
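
    As a hedged sketch of a device-agreement analysis of this kind, the code below runs a least-squares regression and a Bland-Altman comparison on simulated paired load measurements; the data are synthetic and the analysis is illustrative, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic paired load measurements (% body weight): platform vs. crutch sensor.
    rng = np.random.default_rng(1)
    platform = np.concatenate([np.full(20, 10.0), np.full(20, 25.0), np.full(20, 50.0)])
    platform += rng.normal(0, 1.0, platform.size)
    crutch = platform + rng.normal(0, 0.8, platform.size)   # near-perfect device

    # Agreement via linear regression and correlation.
    slope, intercept, r, p, se = stats.linregress(platform, crutch)
    print(f"crutch = {slope:.3f}*platform + {intercept:.3f},  r = {r:.3f}")

    # Bland-Altman bias and limits of agreement.
    diff = crutch - platform
    bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
    ```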

  7. Assessing personal initiative among vocational training students: development and validation of a new measure.

    PubMed

    Balluerka, Nekane; Gorostiaga, Arantxa; Ulacia, Imanol

    2014-11-14

    Personal initiative characterizes people who are proactive, persistent and self-starting when facing the difficulties that arise in achieving goals. Despite its importance in the educational field there is a scarcity of measures to assess students' personal initiative. Thus, the aim of the present study was to develop a questionnaire to assess this variable in the academic environment and to validate it for adolescents and young adults. The sample comprised 244 vocational training students. The questionnaire showed a factor structure including three factors (Proactivity-Prosocial behavior, Persistence and Self-Starting) with acceptable indices of internal consistency (ranging between α = .57 and α = .73) and good convergent validity with respect to the Self-Reported Initiative scale. Evidence of external validity was also obtained based on the relationships between personal initiative and variables such as self-efficacy, enterprising attitude, responsibility and control aspirations, conscientiousness, and academic achievement. The results indicate that this new measure is very useful for assessing personal initiative among vocational training students.

  8. Dependence and physical exercise: Spanish validation of the Exercise Dependence Scale-Revised (EDS-R).

    PubMed

    Sicilia, Alvaro; González-Cutre, David

    2011-05-01

    The purpose of this study was to validate the Spanish version of the Exercise Dependence Scale-Revised (EDS-R). To achieve this goal, a sample of 531 sport center users was used and the psychometric properties of the EDS-R were examined through different analyses. The results supported both the first-order seven-factor model and the higher-order model (seven first-order factors and one second-order factor). The structure of both models was invariant across age. Correlations among the subscales indicated a related factor model, supporting construct validity of the scale. Alpha values over .70 (except for Reduction in Other Activities) and suitable levels of temporal stability were obtained. Users practicing more than three days per week had higher scores in all subscales than the group practicing with a frequency of three days or fewer. The findings of this study provided reliability and validity for the EDS-R in a Spanish context.

  9. Calibration and validation of rainfall thresholds for shallow landslide forecasting in Sicily, southern Italy

    NASA Astrophysics Data System (ADS)

    Gariano, S. L.; Brunetti, M. T.; Iovine, G.; Melillo, M.; Peruccacci, S.; Terranova, O.; Vennari, C.; Guzzetti, F.

    2015-01-01

    Empirical rainfall thresholds are tools to forecast the possible occurrence of rainfall-induced shallow landslides. Accurate prediction of landslide occurrence requires reliable thresholds, which need to be properly validated before their use in operational warning systems. We exploited a catalogue of 200 rainfall conditions that resulted in at least 223 shallow landslides in Sicily, southern Italy, in the 11-year period 2002-2011 to determine regional event duration-cumulated event rainfall (ED) thresholds for shallow landslide occurrence. We computed ED thresholds for different exceedance probability levels and determined the uncertainty associated with the thresholds using a consolidated bootstrap nonparametric technique. We further determined subregional thresholds, and we studied the role of lithology and seasonal periods in the initiation of shallow landslides in Sicily. Next, we validated the regional rainfall thresholds using 29 rainfall conditions that resulted in 42 shallow landslides in Sicily in 2012. We based the validation on contingency tables, skill scores, and a receiver operating characteristic (ROC) analysis for thresholds at different exceedance probability levels, from 1% to 50%. Validation of rainfall thresholds is hampered by lack of information on landslide occurrence. Therefore, we considered the effects of variations in the contingencies and skill scores caused by this lack of information. Based on the results obtained, we propose a general methodology for the objective identification of a threshold that provides an optimal balance between maximizing correct predictions and minimizing incorrect predictions, including missed and false alarms. We expect that the methodology will increase the reliability of rainfall thresholds, fostering the use of validated rainfall thresholds in operational early warning systems for regional shallow landslide forecasting.
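
    A minimal sketch of fitting a power-law ED threshold, E = alpha * D^gamma, in log-log space with a bootstrap estimate of parameter uncertainty is shown below; the rainfall conditions are synthetic, and a real threshold at a given exceedance probability would be shifted below this central fit.

    ```python
    import numpy as np

    def fit_ed_threshold(duration_h, rainfall_mm, n_boot=1000, seed=0):
        """Fit E = alpha * D**gamma in log-log space and bootstrap the parameters."""
        rng = np.random.default_rng(seed)
        logD, logE = np.log10(duration_h), np.log10(rainfall_mm)
        gamma, log_alpha = np.polyfit(logD, logE, 1)
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(logD), len(logD))    # resample with replacement
            boots.append(np.polyfit(logD[idx], logE[idx], 1))
        g_sd, la_sd = np.std(boots, axis=0)                # s.d. of gamma and log10(alpha)
        return (10 ** log_alpha, gamma), (g_sd, la_sd)

    # Synthetic rainfall conditions (duration in hours, cumulated rainfall in mm).
    rng = np.random.default_rng(2)
    D = 10 ** rng.uniform(0.5, 2.5, 200)
    E = 7.0 * D ** 0.45 * 10 ** rng.normal(0, 0.15, 200)
    (alpha, gamma), (g_sd, la_sd) = fit_ed_threshold(D, E)
    print(f"E = {alpha:.1f} * D^{gamma:.2f}  (bootstrap s.d.: gamma ±{g_sd:.2f}, log10(alpha) ±{la_sd:.2f})")
    ```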

  10. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims to validate left ventricular (LV) volume estimates obtained by processing volumetric data using a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance imaging (MRI) data. A validation protocol was defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming the MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured using the segmentation procedure (y) (y = 0.89x + 13.78, r = 0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrates that the segmentation technique is reliably applicable to human hearts in clinical practice.

  11. [Spanish version of the Satisfaction With Decision scale: cross-cultural adaptation, validity and reliability].

    PubMed

    Chabrera, Carolina; Areal, Joan; Font, Albert; Caro, Mónica; Bonet, Marta; Zabalegui, Adelaida

    2015-01-01

    The aim of this study is to develop a Spanish version of the Satisfaction With Decision scale (SWDs) and to analyse its psychometric properties of validity and reliability. An observational, descriptive study for the validation of a tool to measure satisfaction with the decision. Urology, Radiation Oncology, and Medical Oncology Departments of the Hospital Universitari Germans Trias i Pujol, the Institut Català d'Oncologia, and the Institut Oncològic del Vallès - Hospital General de Catalunya. A total of 170 participants diagnosed with prostate cancer who could read and write in Spanish and gave their informed consent. A translation, back-translation, and cross-cultural adaptation to Spanish was performed on the SWDs. The content validity, criterion validity, construct validity, and reliability (internal consistency and stability) of the Spanish version were evaluated. The SWDs contains 6 items with 5-point Likert scales. A Spanish version (ESD) was obtained that was linguistically and conceptually equivalent to the original version. For criterion validity, the correlation of the ESD with "satisfaction with the decision" measured on a linear analogue scale was significant (r=0.63, P<.01) for all items. The factor analysis showed a single dimension explaining 82.08% of the variance. The ESD showed excellent results in terms of internal consistency (Cronbach alpha=0.95) and good test-retest reliability, with an intraclass correlation coefficient of 0.711. The ESD is a validated Spanish scale to measure satisfaction with decisions taken in health care, and demonstrates adequate validity and reliability. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  12. The validation of a human force model to predict dynamic forces resulting from multi-joint motions

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.

    1992-01-01

    The development and validation of a dynamic strength model for humans is examined. The model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity in all rotational planes. These data were reduced by a least-squares regression technique into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated-joint torque equations were then used to compute the forces resulting from a composite motion, in this case a ratchet wrench push-and-pull operation. A comparison of the model's predictions with the values actually measured for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated-joint measures. Calculated t values comparing model versus measured values for 14 subjects were well within statistically acceptable limits, and regression analysis showed the coefficients of variation between predicted and measured values to lie between 0.72 and 0.80.
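
    As a hedged illustration of the data reduction described above, the sketch below fits one entry of such a torque table, a single-variable second-degree polynomial in angular velocity, by least squares; the joint data are hypothetical.

    ```python
    import numpy as np

    # Hypothetical isolated-joint data: peak elbow-flexion torque (N*m) measured
    # at several angular velocities (deg/s) for one joint angle.
    velocity = np.array([-120, -60, -30, 0, 30, 60, 120], dtype=float)
    torque = np.array([48.0, 44.5, 42.0, 40.0, 36.5, 33.0, 27.5])

    # Least-squares reduction to a single-variable second-degree polynomial,
    # i.e., one row of the torque look-up table described above.
    c2, c1, c0 = np.polyfit(velocity, torque, 2)
    print(f"torque(v) ≈ {c2:.5f}*v^2 + {c1:.3f}*v + {c0:.2f}")

    # Predicted torque at an intermediate velocity.
    print(np.polyval([c2, c1, c0], 45.0))
    ```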

  13. [Reliability for detection of developmental problems using the semaphore from the Child Development Evaluation test: Is a yellow result different from a red result?]

    PubMed

    Rizzoli-Córdoba, Antonio; Ortega-Ríosvelasco, Fernando; Villasís-Keever, Miguel Ángel; Pizarro-Castellanos, Mariel; Buenrostro-Márquez, Guillermo; Aceves-Villagrán, Daniel; O'Shea-Cuevas, Gabriel; Muñoz-Hernández, Onofre

    The Child Development Evaluation (CDE) test is a screening tool designed and validated in Mexico for detecting developmental problems. The result is expressed through a semaphore (traffic-light) rating. In the CDE test, both yellow and red results are considered positive, although a different intervention is proposed for each. The aim of this work was to evaluate the reliability of the CDE test in discriminating between children with a yellow result and those with a red result, based on the developmental domain quotient (DDQ) obtained through the Battelle Developmental Inventory, 2nd edition (in Spanish) (BDI-2). The data for this study were obtained from the validation study. Children with a normal (green) result in the CDE were excluded. Two different cut-off points of the DDQ (BDI-2) were used: <90 to include the low average range, and <80 per domain to define developmental delay. Results were analyzed based on the correlation between the CDE test and each domain of the BDI-2 and by age subgroups. With a cut-off of DDQ <90, 86.8% of tests with a yellow CDE result indicated at least one affected domain and 50% indicated three or more, compared with 93.8% and 78.8%, respectively, for a red result. There were differences in every domain (P<0.001) in the percentage of children with DDQ <80 between the yellow and red results (CDE): cognitive 36.1% vs. 61.9%; communication 27.8% vs. 50.4%; motor 18.1% vs. 39.9%; personal-social 20.1% vs. 28.9%; and adaptive 6.9% vs. 20.4%. The yellow/red semaphore result thus identifies different magnitudes of delay in developmental domains or subdomains, supporting the recommendation of a different intervention for each. Copyright © 2014 Hospital Infantil de México Federico Gómez. Published by Masson Doyma México S.A. All rights reserved.

  14. [Academic and psycho-socio-familiar factors associated with anxiety and depression in university students. Reliability and validity of a questionnaire].

    PubMed

    Balanza Galindo, Serafín; Morales Moreno, Isabel; Guerrero Muñoz, Joaquín; Conesa Conesa, Ana

    2008-01-01

    The high frequency of anxiety and depression in university students is related to social and family factors and to academic stress. The aim of this research was to determine the internal consistency and validity of a questionnaire on socio-familiar and academic situations that may be related to psychopathological conditions in university students. The research was carried out at the Universidad Católica San Antonio de Murcia with 700 students, who were given a questionnaire developed by the researchers. This questionnaire included items evaluating academic and socio-familiar aspects. Variables regarding various stressful situations among students, and the Goldberg anxiety and depression scale, were used as independent criteria to measure the validity of the questionnaire. The reliability of the questionnaire was demonstrated by an intraclass correlation coefficient of 0.819. The original 19-item questionnaire was reduced to 15 items after the homogeneity analysis, obtaining a Cronbach alpha of 0.758. Construct validity was evaluated with a factor analysis of the questionnaire, which yielded two factors representing academic aspects and socio-familiar aspects. The students with a positive anxiety and depression test were those who obtained the highest scores on the global questionnaire and on both factors, supporting criterion validity. The questionnaire is an agile and easy-to-use tool for the assessment and early detection of anxiety and depression in university students.

  15. Development of Learning Models Based on Problem Solving and Meaningful Learning Standards by Expert Validity for Animal Development Course

    NASA Astrophysics Data System (ADS)

    Lufri, L.; Fitri, R.; Yogica, R.

    2018-04-01

    The purpose of this study was to produce a learning model based on problem solving and meaningful learning standards, validated by expert assessment, for the Animal Development course. This is development research that produces a product in the form of a learning model consisting of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of all sub-products, obtained using a questionnaire completed by validators from various fields of expertise (field of study, learning strategy, Bahasa). Data were analysed using descriptive statistics. The results show that the problem-solving and meaningful-learning model was produced; the sub-products declared appropriate by the experts include the syntax of the learning model and the student worksheets.

  16. Automated Smartphone Threshold Audiometry: Validity and Time Efficiency.

    PubMed

    van Tonder, Jessica; Swanepoel, De Wet; Mahomed-Asmail, Faheema; Myburgh, Hermanus; Eikelboom, Robert H

    2017-03-01

    Smartphone-based threshold audiometry with automated testing has the potential to provide affordable access to audiometry in underserved contexts. To validate the threshold version (hearTest) of the validated hearScreen™ smartphone-based application using inexpensive smartphones (Android operating system) and calibrated supra-aural headphones. A repeated-measures within-participant study design was employed to compare air-conduction thresholds (0.5-8 kHz) obtained through automated smartphone audiometry to thresholds obtained through conventional audiometry. A total of 95 participants were included in the study. Of these, 30 were adults who had known bilateral hearing losses of varying degrees (mean age = 59 yr, standard deviation [SD] = 21.8; 56.7% female), and 65 were adolescents (mean age = 16.5 yr, SD = 1.2; 70.8% female), of whom 61 had normal hearing and the remaining 4 had mild hearing losses. Threshold comparisons were made between the two test procedures. The Wilcoxon signed-rank test was used to compare threshold correspondence between manual and smartphone thresholds, and the paired-samples t test was used to compare test time. Within the adult sample, 94.4% of thresholds obtained through smartphone and conventional audiometry corresponded within 10 dB or less. There was no significant difference between smartphone (6.75-min average, SD = 1.5) and conventional audiometry test duration (6.65-min average, SD = 2.5). Within the adolescent sample, 84.7% of thresholds obtained at 0.5, 2, and 4 kHz with hearTest and conventional audiometry corresponded within ≤5 dB. At 1 kHz, 79.3% of the thresholds differed by ≤10 dB. There was a significant difference (p < 0.01) between smartphone (7.09 min, SD = 1.2) and conventional audiometry test duration (3.23 min, SD = 0.6). The hearTest application with calibrated supra-aural headphones provides a cost-effective option to determine valid air-conduction hearing thresholds. American Academy of Audiology.
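
    A minimal sketch of the two statistical comparisons named above (a Wilcoxon signed-rank test on paired thresholds and a paired-samples t test on test durations) is given below, using hypothetical paired data rather than the study measurements.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical paired thresholds (dB HL) at one frequency: manual vs. smartphone.
    manual = rng.choice(np.arange(0, 70, 5), size=30).astype(float)
    smartphone = manual + rng.choice([-5, 0, 0, 5], size=30)

    # Wilcoxon signed-rank test on the paired threshold differences.
    w_stat, w_p = stats.wilcoxon(smartphone, manual)
    print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.3f}")

    # Paired-samples t test on test duration (minutes).
    dur_manual = rng.normal(6.6, 2.5, 30)
    dur_app = rng.normal(6.8, 1.5, 30)
    t_stat, t_p = stats.ttest_rel(dur_app, dur_manual)
    print(f"paired t: t={t_stat:.2f}, p={t_p:.3f}")
    ```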

  17. Validation of Reverse-Engineered and Additive-Manufactured Microsurgical Instrument Prototype.

    PubMed

    Singh, Ramandeep; Suri, Ashish; Anand, Sneh; Baby, Britty

    2016-12-01

    With advancements in imaging techniques, neurosurgical procedures are becoming highly precise and minimally invasive, thus demanding the development of new, ergonomically aesthetic instruments. Conventionally, neurosurgical instruments are manufactured using subtractive manufacturing methods. Such a process is complex, time-consuming, and impractical for prototype development and for the validation of new designs. Therefore, an alternative design process was used, utilizing blue-light scanning, computer-aided design, and additive manufacturing by direct metal laser sintering (DMLS) for microsurgical instrument prototype development. Deviations of the DMLS-fabricated instrument were studied by superimposing scan data of the fabricated instrument on the computer-aided design model. Content and concurrent validity of the fabricated prototypes was assessed by a group of 15 neurosurgeons performing sciatic nerve anastomosis in small laboratory animals. Comparative scoring was obtained for the control and study instruments. A t test was applied to the individual parameters, and the P values for force (P < .0001) and surface roughness (P < .01) were found to be statistically significant. These 2 parameters were further analyzed using objective measures. The results indicate that additive manufacturing by DMLS provides an effective method for prototype development. However, direct application of these additive-manufactured instruments in the operating room requires further validation. © The Author(s) 2016.

  18. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high-resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption implied by using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain a bias correction of the MODIS AOD using multiple factors, including surface reflectivity at 2130 nm, sun-view geometrical factors, and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes the urban pixels constitute the largest deviations from the operational model.
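
    As a hedged sketch of a neural-network bias correction of this kind, the code below regresses a synthetic AOD bias on surface reflectivity at 2130 nm, a sun-view geometry factor, and a land-class code using a small multilayer perceptron; the data and feature choices are stand-ins, not the DragonNET processing chain.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in data: predictors are surface reflectivity at 2130 nm,
    # a sun-view geometry factor, and a land-class code; the target is the AOD
    # bias (retrieved AOD minus collocated ground-truth AOD).
    rng = np.random.default_rng(4)
    n = 2000
    refl_2130 = rng.uniform(0.05, 0.30, n)
    geometry = rng.uniform(0.0, 1.0, n)
    land_class = rng.integers(0, 4, n).astype(float)
    bias = 0.4 * refl_2130 - 0.05 * geometry + 0.02 * land_class + rng.normal(0, 0.02, n)

    X = np.column_stack([refl_2130, geometry, land_class])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                       random_state=0))
    model.fit(X, bias)

    # Bias-corrected AOD = retrieved AOD minus the predicted bias.
    aod_retrieved = np.array([0.35])
    aod_corrected = aod_retrieved - model.predict(X[:1])
    print(aod_corrected)
    ```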

  19. Non-invasive transcranial ultrasound therapy based on a 3D CT scan: protocol validation and in vitro results

    NASA Astrophysics Data System (ADS)

    Marquet, F.; Pernot, M.; Aubry, J.-F.; Montaldo, G.; Marsac, L.; Tanter, M.; Fink, M.

    2009-05-01

    A non-invasive protocol for transcranial brain tissue ablation with ultrasound is studied and validated in vitro. The skull induces strong aberrations both in phase and in amplitude, resulting in a severe degradation of the beam shape. Adaptive corrections of the distortions induced by the skull bone are performed using a prior 3D computed tomography (CT) scan acquisition of the skull bone structure. These CT scan data are used as input parameters in an FDTD (finite-difference time-domain) simulation of the full wave propagation equation. A numerical computation is used to deduce the impulse response relating the targeted location and the ultrasound therapeutic array, thus providing a virtual time-reversal mirror. This impulse response is then time-reversed and transmitted experimentally by a therapeutic array positioned in exactly the same reference frame as the one used during the CT scan acquisitions. In vitro experiments are conducted on monkey and human skull specimens using an array of 300 transmit elements working at a central frequency of 1 MHz. These experiments show a precise refocusing of the ultrasonic beam at the targeted location, with a positioning error lower than 0.7 mm. The complete validation of this transcranial adaptive focusing procedure paves the way for in vivo animal and human transcranial HIFU investigations.
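
    A minimal sketch of the time-reversal idea behind this protocol is shown below with a toy delay-only propagation model (not an FDTD solver): time-reversing the impulse responses from the target to each element makes the back-propagated contributions re-align at a single instant, i.e., the focus; the sampling rate and travel times are hypothetical.

    ```python
    import numpy as np

    # Toy illustration of time-reversal focusing with a simple delay model:
    # the "impulse response" from the target to element i is a delayed spike.
    fs = 10e6                                   # sampling rate, Hz
    n = 1024
    delays_s = np.array([12.3e-6, 14.1e-6, 15.8e-6, 13.0e-6])   # hypothetical travel times

    impulse_responses = np.zeros((delays_s.size, n))
    for i, d in enumerate(delays_s):
        impulse_responses[i, int(round(d * fs))] = 1.0

    # Time-reverse each channel: these are the signals to emit.
    emissions = impulse_responses[:, ::-1]

    # Propagating each emission back through the same delay and summing shows
    # the contributions re-align (coherent sum) at a single sample: the focus.
    received = np.zeros(2 * n)
    for i, d in enumerate(delays_s):
        shift = int(round(d * fs))
        received[shift:shift + n] += emissions[i]
    print(int(np.argmax(received)), received.max())   # peak equals the number of elements
    ```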

  20. [Effect of a multidisciplinary protocol on the clinical results obtained after bariatric surgery].

    PubMed

    Cánovas Gaillemin, B; Sastre Martos, J; Moreno Segura, G; Llamazares Iglesias, O; Familiar Casado, C; Abad de Castro, S; López Pardo, R; Sánchez-Cabezudo Muñoz, M A

    2011-01-01

    Bariatric surgery has been shown to be an effective therapy for weight loss in patients with severe obesity, and the implementation of a multidisciplinary management protocol is recommended. To assess the usefulness of the implementation of a management protocol for obesity surgery based on the Spanish Consensus Document of the SEEDO. Retrospective comparative study of the outcomes in patients operated on before (51 patients) and after the implementation of the protocol (66 patients). The following data were gathered: anthropometry, pre- and post-surgery comorbidities, post-surgical nutritional and surgical complications, a validated quality-of-life questionnaire, and dietary habits. Withdrawals (17.6%) and alcoholism (5.8%) were higher in patients treated before implementation of the protocol than after (4.5% and 3%, respectively), the differences being statistically significant. The mortality rate was 2% in the pre-protocol group and 0% in the post-protocol group. Dietary habits were better in the post-protocol group, with the pre-protocol group presenting a higher percentage of feeding-behavior disorders (5.1%), although without reaching statistical significance. The improvement in quality of life was greater in the post-protocol group for all items, but only reached statistical significance for sexual activity (p = 0.004). In the pre-protocol group, 70.5% of the patients had more than one nutritional complication vs. 32.8% in the post-protocol group (p < 0.05). There were no differences regarding the percentage of excess weight lost at two years (> 50% in 81.3% of the pre-protocol group vs. 74.8% of the post-protocol group) or the comorbidities. Bariatric surgery achieves excellent outcomes in weight loss, comorbidities, and quality of life, but presents nutritional, surgical, and psychiatric complications that require a protocol-based and multidisciplinary approach. Our protocol improves the outcomes regarding the withdrawal rates, feeding-behavior disorders, dietary habits